=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
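The failure above is the test's URL watch timing out: minikube dashboard --url only prints a URL once the dashboard proxy passes its health check, which the stderr trace below shows never happening. A minimal Go sketch of this kind of stdout watch (the URL regexp and the timeout are illustrative assumptions, not minikube's actual test code):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	// Hypothetical reproduction of the test's URL watch: start the
	// dashboard command and scan stdout for the first URL-shaped line.
	cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
		"--port", "36195", "-p", "functional-761710")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	urlRe := regexp.MustCompile(`https?://[^\s]+`) // assumed pattern
	found := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if m := urlRe.FindString(sc.Text()); m != "" {
				found <- m
				return
			}
		}
	}()
	select {
	case u := <-found:
		fmt.Println("dashboard URL:", u)
	case <-time.After(5 * time.Minute): // assumed timeout
		fmt.Println("output didn't produce a URL")
		cmd.Process.Kill()
	}
}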
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-761710 --alsologtostderr -v=1] stderr:
I1019 16:29:23.311919 53008 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:23.312203 53008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.312216 53008 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:23.312222 53008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.312462 53008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:23.312711 53008 mustload.go:66] Loading cluster: functional-761710
I1019 16:29:23.313041 53008 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:23.313447 53008 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:23.332026 53008 host.go:66] Checking if "functional-761710" exists ...
I1019 16:29:23.332388 53008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:29:23.391563 53008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 16:29:23.381679912 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1019 16:29:23.391680 53008 api_server.go:166] Checking apiserver status ...
I1019 16:29:23.391722 53008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1019 16:29:23.391762 53008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:23.410216 53008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:23.516188 53008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5104/cgroup
W1019 16:29:23.526030 53008 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5104/cgroup: Process exited with status 1
stdout:
stderr:
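This freezer warning is expected on a cgroup v2 host: the unified hierarchy replaces the per-controller N:freezer: entries in /proc/<pid>/cgroup with a single 0:: line, so the egrep exits 1 and the code falls back to the plain ls probe on the next line. A small sketch distinguishing the two layouts (not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// On cgroup v1 each controller gets a line like "7:freezer:/...";
	// on cgroup v2 there is a single unified entry "0::/...".
	data, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		if strings.HasPrefix(line, "0::") {
			fmt.Println("cgroup v2 (no freezer controller line)")
			return
		}
		if strings.Contains(line, ":freezer:") {
			fmt.Println("cgroup v1 freezer:", line)
			return
		}
	}
}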
I1019 16:29:23.526089 53008 ssh_runner.go:195] Run: ls
I1019 16:29:23.530237 53008 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1019 16:29:23.535352 53008 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
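The apiserver healthz probe succeeds against the forwarded endpoint. A hedged sketch of an equivalent probe (TLS verification is skipped here for brevity; minikube itself authenticates with the profile's client certificates, as the rest.Config dump further down shows):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: skip TLS verification instead of loading
	// .minikube/ca.crt and the profile's client cert/key.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}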
W1019 16:29:23.535400 53008 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1019 16:29:23.535581 53008 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:23.535595 53008 addons.go:70] Setting dashboard=true in profile "functional-761710"
I1019 16:29:23.535603 53008 addons.go:239] Setting addon dashboard=true in "functional-761710"
I1019 16:29:23.535635 53008 host.go:66] Checking if "functional-761710" exists ...
I1019 16:29:23.536189 53008 cli_runner.go:164] Run: docker container inspect functional-761710 --format={{.State.Status}}
I1019 16:29:23.558162 53008 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1019 16:29:23.559503 53008 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1019 16:29:23.560814 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1019 16:29:23.560835 53008 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1019 16:29:23.560910 53008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-761710
I1019 16:29:23.583234 53008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-3708/.minikube/machines/functional-761710/id_rsa Username:docker}
I1019 16:29:23.688166 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1019 16:29:23.688194 53008 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1019 16:29:23.701768 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1019 16:29:23.701794 53008 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1019 16:29:23.714692 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1019 16:29:23.714720 53008 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1019 16:29:23.728751 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1019 16:29:23.728773 53008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1019 16:29:23.742180 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1019 16:29:23.742221 53008 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1019 16:29:23.755321 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1019 16:29:23.755354 53008 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1019 16:29:23.767832 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1019 16:29:23.767854 53008 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1019 16:29:23.780495 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1019 16:29:23.780521 53008 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1019 16:29:23.793002 53008 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:23.793023 53008 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1019 16:29:23.806408 53008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1019 16:29:24.275772 53008 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-761710 addons enable metrics-server
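At this point all ten dashboard manifests have been staged under /etc/kubernetes/addons inside the node and applied with a single kubectl invocation over SSH. Reproducing that step by hand could look roughly like this (a sketch: it shells out through minikube ssh rather than minikube's internal ssh_runner, and only three of the ten manifests are listed):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors the apply step from the log: the manifests were already
	// copied to /etc/kubernetes/addons inside the node, and kubectl runs
	// there with the node-local kubeconfig. Paths copied from the log.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-761710", "ssh", "--",
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
		"-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard-*.yaml files as in the log line above
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}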
I1019 16:29:24.277180 53008 addons.go:202] Writing out "functional-761710" config to set dashboard=true...
W1019 16:29:24.277385 53008 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1019 16:29:24.278078 53008 kapi.go:59] client config for functional-761710: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/profiles/functional-761710/client.key", CAFile:"/home/jenkins/minikube-integration/21683-3708/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1019 16:29:24.278601 53008 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1019 16:29:24.278623 53008 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1019 16:29:24.278629 53008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1019 16:29:24.278635 53008 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1019 16:29:24.278644 53008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1019 16:29:24.286985 53008 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard b92c07ce-030a-44b1-8a04-35efeab7c5ec 738 0 2025-10-19 16:29:24 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-19 16:29:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.88.97,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.105.88.97],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
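The Service found above is a ClusterIP service exposing port 80 to targetPort 9090 on pods labeled k8s-app=kubernetes-dashboard. The URL polled below is the standard apiserver service-proxy path, assembled roughly like so (names, ports, and the kubectl proxy endpoint are taken from this log):

package main

import "fmt"

func main() {
	// Standard apiserver service-proxy path:
	//   /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/
	// The port segment is empty here, so the service's single port (80)
	// is used.
	proxyHost := "127.0.0.1:36195" // kubectl proxy endpoint from the log
	ns, scheme, name := "kubernetes-dashboard", "http", "kubernetes-dashboard"
	url := fmt.Sprintf("http://%s/api/v1/namespaces/%s/services/%s:%s:/proxy/",
		proxyHost, ns, scheme, name)
	fmt.Println(url)
}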
W1019 16:29:24.287159 53008 out.go:285] * Launching proxy ...
* Launching proxy ...
I1019 16:29:24.287224 53008 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-761710 proxy --port 36195]
I1019 16:29:24.287515 53008 dashboard.go:159] Waiting for kubectl to output host:port ...
I1019 16:29:24.337241 53008 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1019 16:29:24.337320 53008 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1019 16:29:24.346822 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71d05a66-2e37-4bb7-b8dd-1d4abfb1d156] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7a980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1019 16:29:24.346903 53008 retry.go:31] will retry after 124.601µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.350558 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d455a3a-fc20-4064-8b73-519b6e29cde5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7aa40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I1019 16:29:24.350615 53008 retry.go:31] will retry after 216.393µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.356277 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d1c3f41-e43e-48c7-b209-cd738ade4d50] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1019 16:29:24.356321 53008 retry.go:31] will retry after 301.953µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.359708 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ed7fdc7-8e6f-49c4-b561-1d9809e281ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317180 TLS:<nil>}
I1019 16:29:24.359763 53008 retry.go:31] will retry after 357.191µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.363243 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c7b80a7-433c-4b5e-9584-33a8ecaec207] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ab40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003172c0 TLS:<nil>}
I1019 16:29:24.363299 53008 retry.go:31] will retry after 597.975µs: Temporary Error: unexpected response code: 503
I1019 16:29:24.366911 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[41ab07bf-2eca-4260-ae84-7bccc89c444a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000ce9e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I1019 16:29:24.366957 53008 retry.go:31] will retry after 1.039229ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.370288 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59bb9d39-67ee-4e8a-a3fc-c12a4d61f5c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ac40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317400 TLS:<nil>}
I1019 16:29:24.370374 53008 retry.go:31] will retry after 1.110691ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.375233 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acace3e2-5b11-497b-8943-3457112dadd6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002068c0 TLS:<nil>}
I1019 16:29:24.375282 53008 retry.go:31] will retry after 1.629066ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.379590 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[752c84f4-adca-4c8c-b088-379ff1e45469] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc000 TLS:<nil>}
I1019 16:29:24.379644 53008 retry.go:31] will retry after 1.968417ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.384412 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0be8d5e9-2a60-4c78-8c98-b4ef93e0fed4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1019 16:29:24.384459 53008 retry.go:31] will retry after 3.21246ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.390812 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eea1d252-01b3-4884-b760-34ba6d14cde8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc00175e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc140 TLS:<nil>}
I1019 16:29:24.390858 53008 retry.go:31] will retry after 3.698122ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.397477 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24077880-0fd2-4fc3-9660-8c8de46c0b5f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317540 TLS:<nil>}
I1019 16:29:24.397526 53008 retry.go:31] will retry after 4.928639ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.404902 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62b75097-b006-4967-bb08-e3585a104a75] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc280 TLS:<nil>}
I1019 16:29:24.404953 53008 retry.go:31] will retry after 11.274983ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.419022 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d7c8c8d-4160-40cb-af51-37a0f79dafff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1019 16:29:24.419096 53008 retry.go:31] will retry after 28.550556ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.452621 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3e4dd4ae-fd6c-468b-a97b-78a7c75531a4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc3c0 TLS:<nil>}
I1019 16:29:24.452675 53008 retry.go:31] will retry after 38.345769ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.494668 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[09975259-8a72-40c2-9994-48a57c947fdd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1019 16:29:24.494737 53008 retry.go:31] will retry after 53.318715ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.551710 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[493ab912-05bb-4326-955c-109a4c067f99] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7b000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc500 TLS:<nil>}
I1019 16:29:24.551811 53008 retry.go:31] will retry after 60.453558ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.616106 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[048fe5d4-23c3-45de-bdaf-10640fd8e2d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1019 16:29:24.616178 53008 retry.go:31] will retry after 86.53404ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.706040 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1736325f-6788-45c0-b976-cff92c0a55f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc000c7b100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc640 TLS:<nil>}
I1019 16:29:24.706123 53008 retry.go:31] will retry after 128.275809ms: Temporary Error: unexpected response code: 503
I1019 16:29:24.837956 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abfd279e-3bba-441a-adbe-bf3480fd4de0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:24 GMT]] Body:0xc0015fa7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1019 16:29:24.838029 53008 retry.go:31] will retry after 216.827935ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.058316 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[222376b3-c476-4950-9727-dee579917bf1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc00175e140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc780 TLS:<nil>}
I1019 16:29:25.058380 53008 retry.go:31] will retry after 328.502909ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.390844 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd5ff30f-4a58-4ef4-92c5-464a07c3c875] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc000c7b200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317680 TLS:<nil>}
I1019 16:29:25.390899 53008 retry.go:31] will retry after 315.416144ms: Temporary Error: unexpected response code: 503
I1019 16:29:25.710501 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29df82db-fe4a-45cc-8311-4aa0d387503a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:25 GMT]] Body:0xc0015fa900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1019 16:29:25.710570 53008 retry.go:31] will retry after 571.57826ms: Temporary Error: unexpected response code: 503
I1019 16:29:26.286204 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73695916-e011-4610-bf9a-89abf3c95437] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:26 GMT]] Body:0xc00175e200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fc8c0 TLS:<nil>}
I1019 16:29:26.286261 53008 retry.go:31] will retry after 1.546802408s: Temporary Error: unexpected response code: 503
I1019 16:29:27.836557 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79af8d7f-f2a1-4c2b-983c-de13a19e66fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:27 GMT]] Body:0xc000c7b300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003177c0 TLS:<nil>}
I1019 16:29:27.836650 53008 retry.go:31] will retry after 2.194466349s: Temporary Error: unexpected response code: 503
I1019 16:29:30.034620 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f2ed78fb-ff56-43c0-856a-611054b0c8dd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:30 GMT]] Body:0xc00175e300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1019 16:29:30.034674 53008 retry.go:31] will retry after 3.620479271s: Temporary Error: unexpected response code: 503
I1019 16:29:33.658946 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de89b6f8-4876-4818-8c33-d677864dab11] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:33 GMT]] Body:0xc00175e3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317900 TLS:<nil>}
I1019 16:29:33.659016 53008 retry.go:31] will retry after 3.791866852s: Temporary Error: unexpected response code: 503
I1019 16:29:37.456424 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62432db4-17c0-4dbb-b112-2a5090d7df7f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:37 GMT]] Body:0xc000c7b4c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1019 16:29:37.456494 53008 retry.go:31] will retry after 8.456579226s: Temporary Error: unexpected response code: 503
I1019 16:29:45.917177 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5848962-095e-4b4f-a53e-2f4624fd5041] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:45 GMT]] Body:0xc00175e440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1019 16:29:45.917243 53008 retry.go:31] will retry after 4.606413788s: Temporary Error: unexpected response code: 503
I1019 16:29:50.527988 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a75d02fa-4e71-453e-8309-4f2c9f485e1a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:29:50 GMT]] Body:0xc0015faa00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317a40 TLS:<nil>}
I1019 16:29:50.528078 53008 retry.go:31] will retry after 18.018986578s: Temporary Error: unexpected response code: 503
I1019 16:30:08.550756 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66505db9-b73d-4632-bfae-e7037909b1dc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:08 GMT]] Body:0xc00175e540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0015fca00 TLS:<nil>}
I1019 16:30:08.550817 53008 retry.go:31] will retry after 11.141916924s: Temporary Error: unexpected response code: 503
I1019 16:30:19.696401 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f3bcbe4-7b4e-4b5d-893d-f6341696a1c5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:19 GMT]] Body:0xc0015fab00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1019 16:30:19.696470 53008 retry.go:31] will retry after 35.35287487s: Temporary Error: unexpected response code: 503
I1019 16:30:55.052692 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d16d20f-d7fb-4bb2-9c69-2b72e3b4a1fb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:30:55 GMT]] Body:0xc000c7b640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317b80 TLS:<nil>}
I1019 16:30:55.052766 53008 retry.go:31] will retry after 26.807980584s: Temporary Error: unexpected response code: 503
I1019 16:31:21.867231 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a050042-1f2c-4fc7-8e73-adc0c676ac2f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:31:21 GMT]] Body:0xc0015fab80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00182e000 TLS:<nil>}
I1019 16:31:21.867324 53008 retry.go:31] will retry after 35.226434368s: Temporary Error: unexpected response code: 503
I1019 16:31:57.099119 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69471167-6b04-4290-9adf-f6ead5c0f67c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:31:57 GMT]] Body:0xc00175e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00182e140 TLS:<nil>}
I1019 16:31:57.099202 53008 retry.go:31] will retry after 58.535539436s: Temporary Error: unexpected response code: 503
I1019 16:32:55.638541 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06ffcdab-4d26-44dc-87d1-75bfbe0acabd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:32:55 GMT]] Body:0xc00083a0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000718280 TLS:<nil>}
I1019 16:32:55.638619 53008 retry.go:31] will retry after 43.048478163s: Temporary Error: unexpected response code: 503
I1019 16:33:38.691388 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c8b0dbae-47e7-4d33-bd96-9be621cac567] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:33:38 GMT]] Body:0xc00083a0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007183c0 TLS:<nil>}
I1019 16:33:38.691519 53008 retry.go:31] will retry after 32.789020471s: Temporary Error: unexpected response code: 503
I1019 16:34:11.486102 53008 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c71d796-bbd5-42b4-84fa-a3e85c903835] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 19 Oct 2025 16:34:11 GMT]] Body:0xc000c7a180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000718a00 TLS:<nil>}
I1019 16:34:11.486183 53008 retry.go:31] will retry after 1m15.447738855s: Temporary Error: unexpected response code: 503
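Every probe above received a 503 because the dashboard pod never became ready behind the service, and retry.go stretched its waits from microseconds to over a minute before the test's deadline cut the loop off. A hedged sketch of a comparable backoff loop (the doubling factor, jitter, and cap are assumptions; minikube's retry package may use different constants):

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// probe polls url until it returns 200 or the deadline passes, growing
// the wait roughly exponentially with jitter, as the retry.go lines
// above suggest.
func probe(url string, deadline time.Time) error {
	wait := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		fmt.Printf("will retry after %v: unexpected response\n", wait)
		time.Sleep(wait)
		// Assumed policy: double, add jitter, cap at ~1m15s.
		wait = wait*2 + time.Duration(rand.Int63n(int64(wait)))
		if cap := 75 * time.Second; wait > cap {
			wait = cap
		}
	}
	return fmt.Errorf("timed out waiting for %s", url)
}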
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-761710
helpers_test.go:243: (dbg) docker inspect functional-761710:
-- stdout --
[
{
"Id": "2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06",
"Created": "2025-10-19T16:27:26.539472688Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 40398,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-19T16:27:26.572261738Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
"ResolvConfPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/hostname",
"HostsPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/hosts",
"LogPath": "/var/lib/docker/containers/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06/2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06-json.log",
"Name": "/functional-761710",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"functional-761710:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-761710",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "2b71f09d1a45844908d31cf333a40b01e94292aff4b985dd75ace2895a23ae06",
"LowerDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825-init/diff:/var/lib/docker/overlay2/679788dc5d6c9ac02347cc41d6b5035c8cb9d202024310ee3487f11ae7ab51e7/diff",
"MergedDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/merged",
"UpperDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/diff",
"WorkDir": "/var/lib/docker/overlay2/b581ee1a1a38fd53e589742cc7f8edeb245f8ad3ab646d27d505915144d66825/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-761710",
"Source": "/var/lib/docker/volumes/functional-761710/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-761710",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-761710",
"name.minikube.sigs.k8s.io": "functional-761710",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a457919bd01d830b11d02904d6fe8de312217e4919369ee669c20e6baa2ba71b",
"SandboxKey": "/var/run/docker/netns/a457919bd01d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32783"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32784"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32787"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32785"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32786"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-761710": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "1a:39:4f:f2:39:04",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "2dcae6e32a0f28d53c1d5609c5e0a2ce3b8ab39e083e1023c0a46d3a121e7012",
"EndpointID": "ea854562658c4c75f61db0f332c5de8cfb4e2ba638e8f3fd23b74a9fec2436e3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-761710",
"2b71f09d1a45"
]
}
}
}
}
]
-- /stdout --
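Note that every PortBindings entry in HostConfig requests HostPort "" on 127.0.0.1, so Docker assigns ephemeral host ports; the resolved values appear only under NetworkSettings.Ports (22/tcp -> 32783 for SSH, 8441/tcp -> 32786 for the apiserver, matching the endpoints used earlier). The same --format template cli_runner used above resolves such a port, wrapped in Go here for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same --format template the log's cli_runner uses to resolve the
	// ephemeral host port bound to the node's SSH port.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-761710").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32783
}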
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-761710 -n functional-761710
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-761710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-761710 logs -n 25: (1.227030033s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-761710 ssh findmnt -T /mount3 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ mount │ -p functional-761710 --kill=true │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ │
│ ssh │ functional-761710 ssh sudo cat /etc/test/nested/copy/7254/hosts │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ cp │ functional-761710 cp testdata/cp-test.txt /home/docker/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh -n functional-761710 sudo cat /home/docker/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ cp │ functional-761710 cp functional-761710:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd124758359/001/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh -n functional-761710 sudo cat /home/docker/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ cp │ functional-761710 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh -n functional-761710 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /etc/ssl/certs/7254.pem │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /usr/share/ca-certificates/7254.pem │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /etc/ssl/certs/72542.pem │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /usr/share/ca-certificates/72542.pem │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ image │ functional-761710 image ls --format short --alsologtostderr │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ image │ functional-761710 image ls --format yaml --alsologtostderr │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ ssh │ functional-761710 ssh pgrep buildkitd │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ │
│ image │ functional-761710 image ls --format json --alsologtostderr │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ image │ functional-761710 image ls --format table --alsologtostderr │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ image │ functional-761710 image build -t localhost/my-image:functional-761710 testdata/build --alsologtostderr │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ update-context │ functional-761710 update-context --alsologtostderr -v=2 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ update-context │ functional-761710 update-context --alsologtostderr -v=2 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ update-context │ functional-761710 update-context --alsologtostderr -v=2 │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
│ image │ functional-761710 image ls │ functional-761710 │ jenkins │ v1.37.0 │ 19 Oct 25 16:29 UTC │ 19 Oct 25 16:29 UTC │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/19 16:29:23
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
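The header spells out the klog line format used throughout this log: a severity letter, month and day, wall-clock time with microseconds, the PID, the source location, then the message. A small parser sketch for that layout (the regexp is an illustrative assumption):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	line := "I1019 16:29:23.061173 52761 out.go:360] Setting OutFile to fd 1 ..."
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}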
I1019 16:29:23.061173 52761 out.go:360] Setting OutFile to fd 1 ...
I1019 16:29:23.061415 52761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.061424 52761 out.go:374] Setting ErrFile to fd 2...
I1019 16:29:23.061428 52761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:29:23.061662 52761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-3708/.minikube/bin
I1019 16:29:23.062133 52761 out.go:368] Setting JSON to false
I1019 16:29:23.063238 52761 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":705,"bootTime":1760890658,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1019 16:29:23.063316 52761 start.go:143] virtualization: kvm guest
I1019 16:29:23.065319 52761 out.go:179] * [functional-761710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1019 16:29:23.066783 52761 notify.go:221] Checking for updates...
I1019 16:29:23.066794 52761 out.go:179] - MINIKUBE_LOCATION=21683
I1019 16:29:23.068249 52761 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1019 16:29:23.069973 52761 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21683-3708/kubeconfig
I1019 16:29:23.071299 52761 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-3708/.minikube
I1019 16:29:23.072799 52761 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1019 16:29:23.074201 52761 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1019 16:29:23.076001 52761 config.go:182] Loaded profile config "functional-761710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1019 16:29:23.076536 52761 driver.go:422] Setting default libvirt URI to qemu:///system
I1019 16:29:23.104147 52761 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
I1019 16:29:23.104327 52761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:29:23.182350 52761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-19 16:29:23.169590828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1019 16:29:23.182539 52761 docker.go:319] overlay module found
I1019 16:29:23.184086 52761 out.go:179] * Using the docker driver based on existing profile
I1019 16:29:23.185550 52761 start.go:309] selected driver: docker
I1019 16:29:23.185570 52761 start.go:930] validating driver "docker" against &{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1019 16:29:23.185696 52761 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1019 16:29:23.185803 52761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1019 16:29:23.253848 52761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-19 16:29:23.24165136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1019 16:29:23.254460 52761 cni.go:84] Creating CNI manager for ""
I1019 16:29:23.254515 52761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1019 16:29:23.254562 52761 start.go:353] cluster config:
{Name:functional-761710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-761710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1019 16:29:23.256550 52761 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
2f5a7af658f76 07ccdb7838758 4 minutes ago Running myfrontend 0 5db0711b569bb sp-pod default
f1a44b7013fc5 56cc512116c8f 4 minutes ago Exited mount-munger 0 5903cc5f08550 busybox-mount default
01377ea863bb5 5e7abcdd20216 5 minutes ago Running nginx 0 bd4ac4e508b8c nginx-svc default
950283f0ddd2a 9056ab77afb8e 5 minutes ago Running echo-server 0 08d09ecc604fc hello-node-75c85bcc94-5nt4t default
bbbd1ab2885a0 9056ab77afb8e 5 minutes ago Running echo-server 0 d943eac4fe226 hello-node-connect-7d85dfc575-4w6jk default
08973f3b34332 c3994bc696102 5 minutes ago Running kube-apiserver 0 5f3327a24a4b5 kube-apiserver-functional-761710 kube-system
1462b09dbe799 7dd6aaa1717ab 5 minutes ago Running kube-scheduler 1 1b8149a8c3599 kube-scheduler-functional-761710 kube-system
4973497bd1c42 c80c8dbafe7dd 5 minutes ago Running kube-controller-manager 1 79c59ac0302e5 kube-controller-manager-functional-761710 kube-system
e85bee2c96edb 5f1f5298c888d 5 minutes ago Running etcd 1 00da85ec4a331 etcd-functional-761710 kube-system
c968c41169b99 6e38f40d628db 6 minutes ago Running storage-provisioner 1 5760eea995196 storage-provisioner kube-system
f34e350aeb779 409467f978b4a 6 minutes ago Running kindnet-cni 1 a88f299a1e3f7 kindnet-l9dts kube-system
7c27f34b9f2fd fc25172553d79 6 minutes ago Running kube-proxy 1 de9f1d19b6030 kube-proxy-ffw5j kube-system
a4020f1f6d2fc 52546a367cc9e 6 minutes ago Running coredns 1 f708417819904 coredns-66bc5c9577-mcw9m kube-system
db0ccb1d66c53 52546a367cc9e 6 minutes ago Exited coredns 0 f708417819904 coredns-66bc5c9577-mcw9m kube-system
0a88ac387625b 6e38f40d628db 6 minutes ago Exited storage-provisioner 0 5760eea995196 storage-provisioner kube-system
1cda6a7dc16b1 409467f978b4a 6 minutes ago Exited kindnet-cni 0 a88f299a1e3f7 kindnet-l9dts kube-system
7c91085af0ff8 fc25172553d79 6 minutes ago Exited kube-proxy 0 de9f1d19b6030 kube-proxy-ffw5j kube-system
1b3ed669750ce 5f1f5298c888d 6 minutes ago Exited etcd 0 00da85ec4a331 etcd-functional-761710 kube-system
c4a059002b214 c80c8dbafe7dd 6 minutes ago Exited kube-controller-manager 0 79c59ac0302e5 kube-controller-manager-functional-761710 kube-system
d7d06b0ece08f 7dd6aaa1717ab 6 minutes ago Exited kube-scheduler 0 1b8149a8c3599 kube-scheduler-functional-761710 kube-system
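Note that no kubernetes-dashboard container appears anywhere in this table, even though both dashboard pods were scheduled minutes earlier (see "describe nodes" below): the images never finished pulling, so no container was ever created. A quick way to confirm this from the host, sketched with standard minikube/kubectl commands (illustrative, not part of the captured log; the crictl listing is roughly what this section is derived from):
  $ minikube -p functional-761710 ssh -- sudo crictl ps -a   # all containers on the node, running and exited
  $ kubectl -n kubernetes-dashboard get pods                 # expect ErrImagePull / ImagePullBackOff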
==> containerd <==
Oct 19 16:31:01 functional-761710 containerd[3839]: time="2025-10-19T16:31:01.542421140Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 19 16:31:01 functional-761710 containerd[3839]: time="2025-10-19T16:31:01.544299691Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:31:02 functional-761710 containerd[3839]: time="2025-10-19T16:31:02.125829505Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.782751402Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.782845579Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.783574652Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
Oct 19 16:31:03 functional-761710 containerd[3839]: time="2025-10-19T16:31:03.785061005Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:31:04 functional-761710 containerd[3839]: time="2025-10-19T16:31:04.375355509Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:31:06 functional-761710 containerd[3839]: time="2025-10-19T16:31:06.019641333Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 19 16:31:06 functional-761710 containerd[3839]: time="2025-10-19T16:31:06.019679842Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
Oct 19 16:32:25 functional-761710 containerd[3839]: time="2025-10-19T16:32:25.540993965Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Oct 19 16:32:25 functional-761710 containerd[3839]: time="2025-10-19T16:32:25.542798845Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:26 functional-761710 containerd[3839]: time="2025-10-19T16:32:26.134805521Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:28 functional-761710 containerd[3839]: time="2025-10-19T16:32:28.137459029Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 19 16:32:28 functional-761710 containerd[3839]: time="2025-10-19T16:32:28.137551428Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
Oct 19 16:32:31 functional-761710 containerd[3839]: time="2025-10-19T16:32:31.541373580Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
Oct 19 16:32:31 functional-761710 containerd[3839]: time="2025-10-19T16:32:31.542952328Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:32 functional-761710 containerd[3839]: time="2025-10-19T16:32:32.149442280Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.797489879Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.797546775Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.798397856Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 19 16:32:33 functional-761710 containerd[3839]: time="2025-10-19T16:32:33.800009481Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:34 functional-761710 containerd[3839]: time="2025-10-19T16:32:34.386132541Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 19 16:32:36 functional-761710 containerd[3839]: time="2025-10-19T16:32:36.031371027Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 19 16:32:36 functional-761710 containerd[3839]: time="2025-10-19T16:32:36.031456486Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
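Every pull attempt above fails in the same two-step pattern: containerd first logs failed to decode hosts.toml (error "invalid `host` tree"), which suggests a malformed registry hosts file under /etc/containerd/certs.d/ that is then ignored for that pull, so the request falls through directly to registry-1.docker.io, where it dies with 429 Too Many Requests, the unauthenticated Docker Hub rate limit. For reference, a minimal well-formed hosts.toml looks like the sketch below; the mirror URL is a placeholder, not taken from this environment:
  # /etc/containerd/certs.d/docker.io/hosts.toml (illustrative sketch)
  server = "https://registry-1.docker.io"

  [host."https://mirror.example.internal"]
    capabilities = ["pull", "resolve"]
With a valid mirror entry (or authenticated pulls), the dashboard, metrics-scraper, and mysql pulls above would stop counting against the anonymous rate limit.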
==> coredns [a4020f1f6d2fca615331c9843c7c8dc741b34fca8cbb3cce01f2ccad93bb295a] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:54651 - 6144 "HINFO IN 3449147369874917196.2101408998229899954. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.156774922s
==> coredns [db0ccb1d66c53de96180d5bd61b1a535ac4175f9b223e1d6d2c252489fd79e1a] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:39851 - 7886 "HINFO IN 1345558034441184947.6162443656980074187. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022754018s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-761710
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-761710
kubernetes.io/os=linux
minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
minikube.k8s.io/name=functional-761710
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_19T16_27_41_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 19 Oct 2025 16:27:37 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-761710
AcquireTime: <unset>
RenewTime: Sun, 19 Oct 2025 16:34:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 19 Oct 2025 16:33:28 +0000 Sun, 19 Oct 2025 16:27:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 19 Oct 2025 16:33:28 +0000 Sun, 19 Oct 2025 16:27:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 19 Oct 2025 16:33:28 +0000 Sun, 19 Oct 2025 16:27:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 19 Oct 2025 16:33:28 +0000 Sun, 19 Oct 2025 16:27:57 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-761710
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
System Info:
Machine ID: d003bb31a145a6c010d7ddda68f0c68d
System UUID: 5ae68d56-98af-4144-b34d-e9f4fe2ba653
Boot ID: 6b9d3a6f-b4ab-4fcc-81f2-3c26fae1271b
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-5nt4t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m15s
default hello-node-connect-7d85dfc575-4w6jk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m15s
default mysql-5bb876957f-f6zqq 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 4m56s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m15s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m57s
kube-system coredns-66bc5c9577-mcw9m 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m38s
kube-system etcd-functional-761710 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 6m44s
kube-system kindnet-l9dts 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 6m39s
kube-system kube-apiserver-functional-761710 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-controller-manager-functional-761710 200m (2%) 0 (0%) 0 (0%) 0 (0%) 6m44s
kube-system kube-proxy-ffw5j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m39s
kube-system kube-scheduler-functional-761710 100m (1%) 0 (0%) 0 (0%) 0 (0%) 6m44s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m38s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-5tf7v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-7vhtt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1450m (18%) 800m (10%)
memory 732Mi (2%) 920Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m37s kube-proxy
Normal Starting 6m kube-proxy
Normal NodeAllocatableEnforced 6m44s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m44s kubelet Node functional-761710 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m44s kubelet Node functional-761710 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m44s kubelet Node functional-761710 status is now: NodeHasSufficientPID
Normal Starting 6m44s kubelet Starting kubelet.
Normal RegisteredNode 6m40s node-controller Node functional-761710 event: Registered Node functional-761710 in Controller
Normal NodeReady 6m27s kubelet Node functional-761710 status is now: NodeReady
Normal Starting 5m43s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m43s (x8 over 5m43s) kubelet Node functional-761710 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m43s (x8 over 5m43s) kubelet Node functional-761710 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m43s (x7 over 5m43s) kubelet Node functional-761710 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m43s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m38s node-controller Node functional-761710 event: Registered Node functional-761710 in Controller
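The node itself is healthy throughout: Ready since 16:27:57, no memory/disk/PID pressure, and the only anomalies are workload-level (the Docker Hub images that cannot pull). To regenerate this section or drill into the stuck dashboard pods, the standard commands would be (illustrative; the label selector is an assumption about the dashboard addon's pod labels):
  $ kubectl describe node functional-761710
  $ kubectl -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard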
==> dmesg <==
[Oct19 16:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.002002] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[ +0.086015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.395420] i8042: Warning: Keylock active
[ +0.009777] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.004107] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.000664] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000659] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000749] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000673] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000734] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0.486149] block sda: the capability attribute has been deprecated.
[ +0.085903] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.022480] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +5.261018] kauditd_printk_skb: 47 callbacks suppressed
==> etcd [1b3ed669750ce399f206e2f340f450d87681263adb0c18cb6b8771fbe062e569] <==
{"level":"warn","ts":"2025-10-19T16:27:37.400537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.406675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51766","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.413453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.428017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.433969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51804","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.439718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:27:37.482868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-19T16:28:22.817527Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-19T16:28:22.817593Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-761710","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-10-19T16:28:22.817686Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-19T16:28:29.819533Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-19T16:28:29.819634Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-19T16:28:29.819690Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"warn","ts":"2025-10-19T16:28:29.819710Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-19T16:28:29.819733Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-19T16:28:29.819731Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-19T16:28:29.819745Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-19T16:28:29.819743Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-19T16:28:29.819745Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-10-19T16:28:29.819766Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"error","ts":"2025-10-19T16:28:29.819754Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-19T16:28:29.821877Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-10-19T16:28:29.821934Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-19T16:28:29.821969Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-10-19T16:28:29.821983Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-761710","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> etcd [e85bee2c96edbced28d97ebccf45eba0479dc4e989d2c13c4223e8cff0e1383c] <==
{"level":"warn","ts":"2025-10-19T16:28:42.971392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38730","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:42.978095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38748","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:42.989711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:42.995697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38774","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.003066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38790","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.009203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38814","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.015096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38832","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.022319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.028492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.035353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.042513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38920","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.049036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38930","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.063812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38964","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.070962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38990","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.077216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39008","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.084149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.090676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.097239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39060","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.103665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.110695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39090","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.116652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39112","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.127526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39140","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.133648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39164","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.139968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-19T16:28:43.188849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39190","server-name":"","error":"EOF"}
==> kernel <==
16:34:24 up 16 min, 0 user, load average: 0.18, 0.40, 0.42
Linux functional-761710 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [1cda6a7dc16b1ab1780421ca338150ff0caab7ffbf29c2e3eac63886eaea63b5] <==
I1019 16:27:46.938998 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1019 16:27:47.023001 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1019 16:27:47.023197 1 main.go:148] setting mtu 1500 for CNI
I1019 16:27:47.023213 1 main.go:178] kindnetd IP family: "ipv4"
I1019 16:27:47.023235 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-10-19T16:27:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1019 16:27:47.234099 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1019 16:27:47.234191 1 controller.go:381] "Waiting for informer caches to sync"
I1019 16:27:47.234214 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1019 16:27:47.234493 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1019 16:27:47.534453 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1019 16:27:47.534483 1 metrics.go:72] Registering metrics
I1019 16:27:47.534521 1 controller.go:711] "Syncing nftables rules"
I1019 16:27:57.226737 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:27:57.226826 1 main.go:301] handling current node
I1019 16:28:07.232918 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:28:07.232952 1 main.go:301] handling current node
I1019 16:28:17.229134 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:28:17.229185 1 main.go:301] handling current node
==> kindnet [f34e350aeb77902eca5e271db9df5c60cd64a19bb27366157050e71391704a75] <==
I1019 16:32:24.070936 1 main.go:301] handling current node
I1019 16:32:34.077024 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:32:34.077083 1 main.go:301] handling current node
I1019 16:32:44.070140 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:32:44.070190 1 main.go:301] handling current node
I1019 16:32:54.071489 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:32:54.071527 1 main.go:301] handling current node
I1019 16:33:04.069666 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:04.069707 1 main.go:301] handling current node
I1019 16:33:14.070193 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:14.070231 1 main.go:301] handling current node
I1019 16:33:24.071902 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:24.071932 1 main.go:301] handling current node
I1019 16:33:34.073651 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:34.073690 1 main.go:301] handling current node
I1019 16:33:44.070113 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:44.070148 1 main.go:301] handling current node
I1019 16:33:54.072337 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:33:54.072369 1 main.go:301] handling current node
I1019 16:34:04.073108 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:34:04.073143 1 main.go:301] handling current node
I1019 16:34:14.070179 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:34:14.070222 1 main.go:301] handling current node
I1019 16:34:24.071177 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1019 16:34:24.071212 1 main.go:301] handling current node
==> kube-apiserver [08973f3b34332566862f52063d35ff4b6a30b82b42d4a971953fe02a3bcf1241] <==
I1019 16:28:43.633635 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1019 16:28:43.633642 1 cache.go:39] Caches are synced for autoregister controller
I1019 16:28:43.637782 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1019 16:28:43.663575 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1019 16:28:44.535846 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1019 16:28:44.671230 1 controller.go:667] quota admission added evaluator for: serviceaccounts
W1019 16:28:44.840440 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1019 16:28:44.841660 1 controller.go:667] quota admission added evaluator for: endpoints
I1019 16:28:44.846194 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1019 16:28:45.393664 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1019 16:28:45.487184 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1019 16:28:45.534802 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1019 16:28:45.540187 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1019 16:28:47.927207 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1019 16:29:04.625735 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.69.252"}
I1019 16:29:09.382602 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.29.20"}
I1019 16:29:09.443972 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.192.131"}
I1019 16:29:09.455438 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.224.244"}
I1019 16:29:24.142868 1 controller.go:667] quota admission added evaluator for: namespaces
I1019 16:29:24.255363 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.88.97"}
I1019 16:29:24.268479 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.219.99"}
E1019 16:29:26.569540 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39088: use of closed network connection
I1019 16:29:28.306776 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.126.77"}
E1019 16:29:36.968094 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49670: use of closed network connection
==> kube-controller-manager [4973497bd1c429e784b628f248d777abe37538657f8c8e3fbcc1aa1f963be117] <==
I1019 16:28:46.963225 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1019 16:28:46.963360 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1019 16:28:46.964377 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1019 16:28:46.964406 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1019 16:28:46.965145 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1019 16:28:46.965215 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1019 16:28:46.967650 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1019 16:28:46.969815 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1019 16:28:46.970164 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1019 16:28:46.971240 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1019 16:28:46.973484 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1019 16:28:46.974703 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1019 16:28:46.976896 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1019 16:28:46.979370 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1019 16:28:46.979449 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1019 16:28:46.979533 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-761710"
I1019 16:28:46.979602 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1019 16:28:46.981556 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1019 16:28:46.988910 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1019 16:29:24.190944 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1019 16:29:24.195610 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1019 16:29:24.197978 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1019 16:29:24.199685 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1019 16:29:24.204090 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1019 16:29:24.207392 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [c4a059002b214bc47299729cff5112eae776a7586a794ec2aa1f122731cd2ccc] <==
I1019 16:27:44.878349 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1019 16:27:44.878446 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I1019 16:27:44.878473 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1019 16:27:44.878496 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1019 16:27:44.878548 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1019 16:27:44.878693 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1019 16:27:44.878743 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1019 16:27:44.878911 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1019 16:27:44.878923 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1019 16:27:44.879312 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1019 16:27:44.879344 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1019 16:27:44.879385 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1019 16:27:44.879393 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1019 16:27:44.882030 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1019 16:27:44.883652 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1019 16:27:44.883773 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1019 16:27:44.883807 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1019 16:27:44.883816 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1019 16:27:44.883823 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1019 16:27:44.888799 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1019 16:27:44.895191 1 shared_informer.go:356] "Caches are synced" controller="job"
I1019 16:27:44.898493 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1019 16:27:44.903639 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1019 16:27:44.912959 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1019 16:27:59.880398 1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
==> kube-proxy [7c27f34b9f2fd5e95e689810fa8cf4a8aad2065d1b8cba2921e43bcecfd12157] <==
I1019 16:28:23.733826 1 server_linux.go:53] "Using iptables proxy"
I1019 16:28:23.795848 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1019 16:28:23.896002 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1019 16:28:23.896075 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1019 16:28:23.896151 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1019 16:28:23.917653 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1019 16:28:23.917700 1 server_linux.go:132] "Using iptables Proxier"
I1019 16:28:23.923168 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1019 16:28:23.923898 1 server.go:527] "Version info" version="v1.34.1"
I1019 16:28:23.923932 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1019 16:28:23.925726 1 config.go:200] "Starting service config controller"
I1019 16:28:23.925747 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1019 16:28:23.925787 1 config.go:106] "Starting endpoint slice config controller"
I1019 16:28:23.925797 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1019 16:28:23.925801 1 config.go:403] "Starting serviceCIDR config controller"
I1019 16:28:23.925808 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1019 16:28:23.925819 1 config.go:309] "Starting node config controller"
I1019 16:28:23.925825 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1019 16:28:23.925831 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1019 16:28:24.026703 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1019 16:28:24.026831 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1019 16:28:24.026891 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-proxy [7c91085af0ff846a5f5570f5cd5939b894aec16fc914b191677051465657777a] <==
I1019 16:27:46.486097 1 server_linux.go:53] "Using iptables proxy"
I1019 16:27:46.544808 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1019 16:27:46.645849 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1019 16:27:46.645904 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1019 16:27:46.646005 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1019 16:27:46.669329 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1019 16:27:46.669389 1 server_linux.go:132] "Using iptables Proxier"
I1019 16:27:46.675140 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1019 16:27:46.675732 1 server.go:527] "Version info" version="v1.34.1"
I1019 16:27:46.675769 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1019 16:27:46.677658 1 config.go:200] "Starting service config controller"
I1019 16:27:46.677676 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1019 16:27:46.677736 1 config.go:403] "Starting serviceCIDR config controller"
I1019 16:27:46.677749 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1019 16:27:46.677771 1 config.go:106] "Starting endpoint slice config controller"
I1019 16:27:46.677778 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1019 16:27:46.678690 1 config.go:309] "Starting node config controller"
I1019 16:27:46.678775 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1019 16:27:46.678802 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1019 16:27:46.777849 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1019 16:27:46.777894 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1019 16:27:46.777895 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [1462b09dbe799076213d74cc43d920ba9338c4f6897c8e0f1cee85968c4316ca] <==
I1019 16:28:42.304977 1 serving.go:386] Generated self-signed cert in-memory
W1019 16:28:43.569376 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1019 16:28:43.569414 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
W1019 16:28:43.569428 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1019 16:28:43.569437 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1019 16:28:43.583920 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1019 16:28:43.583943 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1019 16:28:43.585791 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1019 16:28:43.585829 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1019 16:28:43.586154 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1019 16:28:43.586213 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1019 16:28:43.686218 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
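The requestheader_controller warning above carries its own fix template. A hypothetical instantiation (the binding name is made up, and --user is substituted for the template's --serviceaccount because the forbidden user named in the accompanying error is system:kube-scheduler):

  kubectl --context functional-761710 -n kube-system create rolebinding scheduler-authentication-reader \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler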
==> kube-scheduler [d7d06b0ece08fd1d83ee5dcb0407dd361d8fe3062ee13eae257debf5ee09797a] <==
E1019 16:27:37.889462 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1019 16:27:37.889469 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1019 16:27:37.889491 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1019 16:27:37.889534 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1019 16:27:37.889177 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1019 16:27:37.889653 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1019 16:27:37.889756 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1019 16:27:37.889833 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1019 16:27:38.706446 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1019 16:27:38.713829 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1019 16:27:38.748178 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1019 16:27:38.857638 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1019 16:27:38.859585 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1019 16:27:38.889282 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1019 16:27:38.939734 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1019 16:27:39.104564 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1019 16:27:39.121875 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1019 16:27:39.152953 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
I1019 16:27:41.886024 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1019 16:28:40.017075 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1019 16:28:40.017093 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1019 16:28:40.017220 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1019 16:28:40.017248 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1019 16:28:40.017281 1 server.go:265] "[graceful-termination] secure server is exiting"
E1019 16:28:40.017307 1 run.go:72] "command failed" err="finished without leader elect"
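The burst of "Failed to watch ... forbidden" errors above typically indicates the scheduler started before the API server finished reconciling its bootstrap RBAC roles; the later "Caches are synced" line shows they were transient here. Had they persisted, the scheduler's effective permissions could be probed with impersonation, for example:

  kubectl --context functional-761710 auth can-i list nodes --as=system:kube-scheduler
  kubectl --context functional-761710 auth can-i get configmaps -n kube-system --as=system:kube-scheduler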
==> kubelet <==
Oct 19 16:32:41 functional-761710 kubelet[4897]: E1019 16:32:41.544395 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:32:47 functional-761710 kubelet[4897]: E1019 16:32:47.543835 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:32:48 functional-761710 kubelet[4897]: E1019 16:32:48.540303 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:32:54 functional-761710 kubelet[4897]: E1019 16:32:54.540713 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:32:58 functional-761710 kubelet[4897]: E1019 16:32:58.540742 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:33:02 functional-761710 kubelet[4897]: E1019 16:33:02.540603 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:33:05 functional-761710 kubelet[4897]: E1019 16:33:05.541116 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:33:12 functional-761710 kubelet[4897]: E1019 16:33:12.541133 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:33:17 functional-761710 kubelet[4897]: E1019 16:33:17.540946 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:33:20 functional-761710 kubelet[4897]: E1019 16:33:20.540531 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:33:26 functional-761710 kubelet[4897]: E1019 16:33:26.540821 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:33:29 functional-761710 kubelet[4897]: E1019 16:33:29.541010 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:33:32 functional-761710 kubelet[4897]: E1019 16:33:32.540659 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:33:41 functional-761710 kubelet[4897]: E1019 16:33:41.543948 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:33:41 functional-761710 kubelet[4897]: E1019 16:33:41.543946 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:33:46 functional-761710 kubelet[4897]: E1019 16:33:46.541008 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:33:52 functional-761710 kubelet[4897]: E1019 16:33:52.540632 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:33:56 functional-761710 kubelet[4897]: E1019 16:33:56.541101 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:33:58 functional-761710 kubelet[4897]: E1019 16:33:58.540930 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:34:03 functional-761710 kubelet[4897]: E1019 16:34:03.540601 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:34:09 functional-761710 kubelet[4897]: E1019 16:34:09.546279 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
Oct 19 16:34:12 functional-761710 kubelet[4897]: E1019 16:34:12.541226 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:34:18 functional-761710 kubelet[4897]: E1019 16:34:18.544083 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-f6zqq" podUID="a065c342-53de-4ec8-ad51-73316b849dc7"
Oct 19 16:34:23 functional-761710 kubelet[4897]: E1019 16:34:23.540344 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7vhtt" podUID="5e59f625-102a-4bda-ab22-74f294540da8"
Oct 19 16:34:24 functional-761710 kubelet[4897]: E1019 16:34:24.540546 4897 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-5tf7v" podUID="20c2626e-8f7b-4765-afb8-e58d1b547085"
==> storage-provisioner [0a88ac387625bcc8f641d8fcfea039499913787ee17be78376e83cef579b85f3] <==
I1019 16:27:57.899436 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-761710_683cce3d-761d-4c11-94e7-16840eda5b37!
W1019 16:27:59.807950 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:27:59.811782 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:01.814709 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:01.818903 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:03.822526 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:03.827155 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:05.830416 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:05.836262 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:07.840278 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:07.844784 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:09.847502 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:09.853003 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:11.856149 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:11.861414 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:13.864430 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:13.870723 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:15.874276 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:15.878004 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:17.881509 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:17.885238 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:19.888227 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:19.892081 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:21.895540 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:28:21.899481 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [c968c41169b995e8f0e0af55ce448bee3ba2bf9ced4d72654dbc8b7c1eaf0ba4] <==
W1019 16:33:59.044796 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:01.047612 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:01.052754 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:03.055655 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:03.059903 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:05.063149 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:05.068214 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:07.071550 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:07.075874 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:09.078836 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:09.083376 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:11.086076 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:11.091023 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:13.093955 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:13.098037 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:15.101568 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:15.107126 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:17.110194 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:17.114350 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:19.117460 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:19.121763 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:21.125294 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:21.129071 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:23.132727 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1019 16:34:23.137561 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
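The storage-provisioner's repeated client-go warnings above mean it still reads and writes v1 Endpoints objects, most likely for its leader-election lock (an assumption; the warning does not name the code path). The replacement discovery.k8s.io/v1 resource can be listed directly to confirm it is served by this cluster:

  kubectl --context functional-761710 get endpointslices.discovery.k8s.io -n kube-system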
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-761710 -n functional-761710
helpers_test.go:269: (dbg) Run: kubectl --context functional-761710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt: exit status 1 (74.431499ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-761710/192.168.49.2
Start Time: Sun, 19 Oct 2025 16:29:23 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
mount-munger:
Container ID: containerd://f1a44b7013fc522b10faa629c43c6fad14dc3052a079440850320321c563ef05
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 19 Oct 2025 16:29:26 +0000
Finished: Sun, 19 Oct 2025 16:29:26 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wtmj (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-8wtmj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m1s default-scheduler Successfully assigned default/busybox-mount to functional-761710
Normal Pulling 5m2s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 4m59s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.135s (2.135s including waiting). Image size: 2395207 bytes.
Normal Created 4m59s kubelet Created container: mount-munger
Normal Started 4m59s kubelet Started container mount-munger
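The mount-munger Args above are a single /bin/sh -c string; reflowed with comments for readability (same commands, POSIX sh semantics assumed):

  cat /mount-9p/created-by-test                          # print the file the test pre-created on the host mount
  echo test > /mount-9p/created-by-pod                   # create a file the test later checks for
  rm /mount-9p/created-by-test-removed-by-pod            # delete a host-created file
  echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
  # in the last command, 'date' is just a second argument to echo; the first
  # redirection still creates created-by-pod-removed-by-test (empty), and the
  # final append redirection sends "test date" into pod-dates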
Name: mysql-5bb876957f-f6zqq
Namespace: default
Priority: 0
Service Account: default
Node: functional-761710/192.168.49.2
Start Time: Sun, 19 Oct 2025 16:29:28 +0000
Labels: app=mysql
pod-template-hash=5bb876957f
Annotations: <none>
Status: Pending
IP: 10.244.0.12
IPs:
IP: 10.244.0.12
Controlled By: ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP (mysql)
Host Port: 0/TCP (mysql)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5fdst (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-5fdst:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m57s default-scheduler Successfully assigned default/mysql-5bb876957f-f6zqq to functional-761710
Normal Pulling 114s (x5 over 4m57s) kubelet Pulling image "docker.io/mysql:5.7"
Warning Failed 112s (x5 over 4m52s) kubelet Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 112s (x5 over 4m52s) kubelet Error: ErrImagePull
Warning Failed 33s (x15 over 4m51s) kubelet Error: ImagePullBackOff
Normal BackOff 7s (x17 over 4m51s) kubelet Back-off pulling image "docker.io/mysql:5.7"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-5tf7v" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7vhtt" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-761710 describe pod busybox-mount mysql-5bb876957f-f6zqq dashboard-metrics-scraper-77bf4d6c4c-5tf7v kubernetes-dashboard-855c9754f9-7vhtt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.16s)