=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] stderr:
I1002 06:19:15.450207 429962 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:15.450448 429962 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.450458 429962 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:15.450462 429962 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.450654 429962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:15.450950 429962 mustload.go:65] Loading cluster: functional-199910
I1002 06:19:15.451298 429962 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:15.451656 429962 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:15.469123 429962 host.go:66] Checking if "functional-199910" exists ...
I1002 06:19:15.469371 429962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:19:15.523116 429962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.513261468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:19:15.523259 429962 api_server.go:166] Checking apiserver status ...
I1002 06:19:15.523322 429962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 06:19:15.523373 429962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:15.541031 429962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:15.645409 429962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup
W1002 06:19:15.653066 429962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup: Process exited with status 1
stdout:
stderr:
I1002 06:19:15.653129 429962 ssh_runner.go:195] Run: ls
I1002 06:19:15.656499 429962 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 06:19:15.661410 429962 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 06:19:15.661450 429962 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 06:19:15.661597 429962 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:15.661607 429962 addons.go:69] Setting dashboard=true in profile "functional-199910"
I1002 06:19:15.661614 429962 addons.go:238] Setting addon dashboard=true in "functional-199910"
I1002 06:19:15.661636 429962 host.go:66] Checking if "functional-199910" exists ...
I1002 06:19:15.662037 429962 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:15.680443 429962 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 06:19:15.681553 429962 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 06:19:15.682403 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 06:19:15.682418 429962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 06:19:15.682466 429962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:15.698455 429962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:15.802277 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 06:19:15.802300 429962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 06:19:15.815119 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 06:19:15.815138 429962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 06:19:15.826618 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 06:19:15.826636 429962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 06:19:15.838594 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 06:19:15.838613 429962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 06:19:15.850339 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 06:19:15.850356 429962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 06:19:15.862313 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 06:19:15.862332 429962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 06:19:15.873961 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 06:19:15.873981 429962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 06:19:15.886281 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 06:19:15.886298 429962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 06:19:15.898021 429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 06:19:15.898038 429962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 06:19:15.910079 429962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 06:19:16.305455 429962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-199910 addons enable metrics-server
I1002 06:19:16.306557 429962 addons.go:201] Writing out "functional-199910" config to set dashboard=true...
W1002 06:19:16.306991 429962 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 06:19:16.307913 429962 kapi.go:59] client config for functional-199910: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.key", CAFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 06:19:16.308426 429962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 06:19:16.308445 429962 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 06:19:16.308449 429962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 06:19:16.308454 429962 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 06:19:16.308465 429962 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 06:19:16.315681 429962 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 80a050fd-cfcb-416a-9f55-860f40ed678f 1247 0 2025-10-02 06:19:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 06:19:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.101.174.148,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.101.174.148],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 06:19:16.315820 429962 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 06:19:16.315876 429962 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-199910 proxy --port 36195]
I1002 06:19:16.316138 429962 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 06:19:16.360000 429962 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 06:19:16.360074 429962 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 06:19:16.367982 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6262cc89-5d16-4e24-9e3b-55735c7b711a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e40c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I1002 06:19:16.368059 429962 retry.go:31] will retry after 88.676µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.371392 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a25b702-86a7-4e0a-b9ed-1020b3f97e31] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a72c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1002 06:19:16.371453 429962 retry.go:31] will retry after 80.21µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.374577 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f892c5b-3a88-49ff-ac34-c6a38c69f441] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I1002 06:19:16.374620 429962 retry.go:31] will retry after 292.049µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.377540 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e84e803e-87a9-4446-9deb-b219453d9a1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a73c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1002 06:19:16.377581 429962 retry.go:31] will retry after 443.208µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.380646 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0839b483-f57f-4acf-a160-8de6408ab2f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf2c0 TLS:<nil>}
I1002 06:19:16.380709 429962 retry.go:31] will retry after 264.305µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.383698 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b01bf94-77e2-41d1-b969-a7d28cd284dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1002 06:19:16.383745 429962 retry.go:31] will retry after 609.486µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.386595 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db88100e-afd4-4dab-8b6d-43adab3ab9c2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748000 TLS:<nil>}
I1002 06:19:16.386629 429962 retry.go:31] will retry after 933.507µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.389443 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acfe5203-362c-45e1-b82a-d5d08357e7ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a74c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748140 TLS:<nil>}
I1002 06:19:16.389489 429962 retry.go:31] will retry after 1.743964ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.393315 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbf99f44-a866-4166-9d48-fd7f2b2b369c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I1002 06:19:16.393348 429962 retry.go:31] will retry after 2.937946ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.398171 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82d7842b-532a-444e-86e1-a7c1308dbedd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748280 TLS:<nil>}
I1002 06:19:16.398209 429962 retry.go:31] will retry after 3.876728ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.404132 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0bfdb4db-d82c-488c-95ea-8ab236f4054c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1002 06:19:16.404165 429962 retry.go:31] will retry after 7.813323ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.414068 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd84c578-f5d5-4ef2-80bd-938f0f2a4214] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a75c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017483c0 TLS:<nil>}
I1002 06:19:16.414100 429962 retry.go:31] will retry after 8.742475ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.424960 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[549b24b5-e0a9-41db-b5bf-5c9c4d269a20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf540 TLS:<nil>}
I1002 06:19:16.424998 429962 retry.go:31] will retry after 13.582393ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.441317 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59385281-0c0e-43af-9ed8-975e923f5663] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1002 06:19:16.441381 429962 retry.go:31] will retry after 13.5332ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.457362 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a30b0c1d-160e-4d18-b94d-843537cd6f6f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748500 TLS:<nil>}
I1002 06:19:16.457407 429962 retry.go:31] will retry after 24.619545ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.484775 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53233ac6-ca8f-44dd-b4bd-33e33c7e164d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748640 TLS:<nil>}
I1002 06:19:16.484827 429962 retry.go:31] will retry after 46.588127ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.534009 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7db8b8d5-faed-4790-a421-59dfc7c48f09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748780 TLS:<nil>}
I1002 06:19:16.534048 429962 retry.go:31] will retry after 65.831904ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.603204 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a55b26c7-59e9-4309-b130-c955ad38f2f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a7700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1002 06:19:16.603268 429962 retry.go:31] will retry after 85.970958ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.693189 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe479821-1511-492f-bb5b-ac594ba24979] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e49c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf680 TLS:<nil>}
I1002 06:19:16.693248 429962 retry.go:31] will retry after 139.090478ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.834790 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a341ed7-ef09-459f-8819-4a0197ce2e91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1002 06:19:16.834849 429962 retry.go:31] will retry after 139.796734ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.977548 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[37947537-9868-4cac-b9b0-1c29ee115bf0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a7780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017488c0 TLS:<nil>}
I1002 06:19:16.977626 429962 retry.go:31] will retry after 363.743668ms: Temporary Error: unexpected response code: 503
I1002 06:19:17.344639 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f40fd6f2-4f2c-4896-9fa5-f61d1aeb419d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:17 GMT]] Body:0xc00085dc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I1002 06:19:17.344696 429962 retry.go:31] will retry after 741.788917ms: Temporary Error: unexpected response code: 503
I1002 06:19:18.090966 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7392d6df-9f81-4eac-92d1-874e2f52fdf9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:18 GMT]] Body:0xc0009a7880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748a00 TLS:<nil>}
I1002 06:19:18.091037 429962 retry.go:31] will retry after 997.833398ms: Temporary Error: unexpected response code: 503
I1002 06:19:19.091605 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54fb561a-b973-4325-94c7-99e80f437e50] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:19 GMT]] Body:0xc0007d80c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I1002 06:19:19.091665 429962 retry.go:31] will retry after 932.61279ms: Temporary Error: unexpected response code: 503
I1002 06:19:20.027117 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dca262d3-c5ca-47f1-bc28-8ded2c78b8ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:20 GMT]] Body:0xc0009a7940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748b40 TLS:<nil>}
I1002 06:19:20.027191 429962 retry.go:31] will retry after 1.794435325s: Temporary Error: unexpected response code: 503
I1002 06:19:21.825634 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[662335d0-93ab-44fb-9f95-d946e69c1eef] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:21 GMT]] Body:0xc0005e4b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I1002 06:19:21.825730 429962 retry.go:31] will retry after 1.776278189s: Temporary Error: unexpected response code: 503
I1002 06:19:23.605617 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f2c932b-b3dd-4090-87e0-f6ece8759852] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:23 GMT]] Body:0xc0005e4c40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1002 06:19:23.605686 429962 retry.go:31] will retry after 4.916942492s: Temporary Error: unexpected response code: 503
I1002 06:19:28.526451 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aabc63ac-1649-4e85-802f-e1d3a60e1c21] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:28 GMT]] Body:0xc0005e4d00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c0000 TLS:<nil>}
I1002 06:19:28.526512 429962 retry.go:31] will retry after 5.906031757s: Temporary Error: unexpected response code: 503
I1002 06:19:34.438460 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97984b78-4542-40fe-a65e-d0d54d92c530] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:34 GMT]] Body:0xc0007d8240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1002 06:19:34.438525 429962 retry.go:31] will retry after 10.752311502s: Temporary Error: unexpected response code: 503
I1002 06:19:45.197681 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c90e805b-2852-4289-9764-f1df91e73efe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:45 GMT]] Body:0xc0009a7a80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I1002 06:19:45.197753 429962 retry.go:31] will retry after 12.873006097s: Temporary Error: unexpected response code: 503
I1002 06:19:58.073334 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9da17a89-b1af-42a6-baa4-a350a23f5a4f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:58 GMT]] Body:0xc0009a7b00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1002 06:19:58.073405 429962 retry.go:31] will retry after 14.544782249s: Temporary Error: unexpected response code: 503
I1002 06:20:12.621959 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49994a32-a939-4fce-9bf0-b138d4359d5b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:20:12 GMT]] Body:0xc0009a7b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748c80 TLS:<nil>}
I1002 06:20:12.622020 429962 retry.go:31] will retry after 24.734520816s: Temporary Error: unexpected response code: 503
I1002 06:20:37.359795 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7827eba0-ad1d-4c34-813a-908d40b6d189] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:20:37 GMT]] Body:0xc0007d8400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c0140 TLS:<nil>}
I1002 06:20:37.359877 429962 retry.go:31] will retry after 58.784061825s: Temporary Error: unexpected response code: 503
I1002 06:21:36.148057 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88d29ef7-2993-480f-853b-8ac11661da7b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:21:36 GMT]] Body:0xc000892d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1002 06:21:36.148134 429962 retry.go:31] will retry after 1m29.114766521s: Temporary Error: unexpected response code: 503
I1002 06:23:05.270261 429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[74559056-0757-4ffe-b5b8-2acd525387a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:23:05 GMT]] Body:0xc0005e4200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316000 TLS:<nil>}
I1002 06:23:05.270360 429962 retry.go:31] will retry after 1m15.519794491s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-199910
helpers_test.go:243: (dbg) docker inspect functional-199910:
-- stdout --
[
{
"Id": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
"Created": "2025-10-02T06:11:55.541637226Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 411620,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-02T06:11:55.570474432Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
"ResolvConfPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hostname",
"HostsPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hosts",
"LogPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded-json.log",
"Name": "/functional-199910",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-199910:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-199910",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
"LowerDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718-init/diff:/var/lib/docker/overlay2/298df2ba9683a73d350c1b6c983da9f2b87e35cf844050b5b24d44ff0e84e14d/diff",
"MergedDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/merged",
"UpperDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/diff",
"WorkDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-199910",
"Source": "/var/lib/docker/volumes/functional-199910/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-199910",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-199910",
"name.minikube.sigs.k8s.io": "functional-199910",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f968325eef651f67db13113c73a3310ee76a7c88af5a211cc222343e85ee43d1",
"SandboxKey": "/var/run/docker/netns/f968325eef65",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33159"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33160"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33163"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33161"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33162"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-199910": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "3a:65:7a:15:4b:c7",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "7d66feb0971ca31aa50fbd8d10400dca354f44739c3efb8d06e897cb43ffc6b4",
"EndpointID": "50a6d2ecb2dfd2667a6e29fd9b2eea174bfafa2a4794ff1c52dc85ca797a6a00",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-199910",
"a129060d7e93"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-199910 -n functional-199910
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-199910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs -n 25: (1.15124341s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ mount │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ mount │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ ssh │ functional-199910 ssh findmnt -T /mount1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ ssh │ functional-199910 ssh findmnt -T /mount2 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ ssh │ functional-199910 ssh findmnt -T /mount3 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ mount │ -p functional-199910 --kill=true │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ start │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ start │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ start │ -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ dashboard │ --url --port 36195 -p functional-199910 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ update-context │ functional-199910 update-context --alsologtostderr -v=2 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ update-context │ functional-199910 update-context --alsologtostderr -v=2 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ update-context │ functional-199910 update-context --alsologtostderr -v=2 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ image │ functional-199910 image ls --format short --alsologtostderr │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ image │ functional-199910 image ls --format yaml --alsologtostderr │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ ssh │ functional-199910 ssh pgrep buildkitd │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ │
│ image │ functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ image │ functional-199910 image ls │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ image │ functional-199910 image ls --format json --alsologtostderr │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ image │ functional-199910 image ls --format table --alsologtostderr │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
│ service │ functional-199910 service list │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ 02 Oct 25 06:23 UTC │
│ service │ functional-199910 service list -o json │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ 02 Oct 25 06:23 UTC │
│ service │ functional-199910 service --namespace=default --https --url hello-node │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ │
│ service │ functional-199910 service hello-node --url --format={{.IP}} │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ │
│ service │ functional-199910 service hello-node --url │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ │
└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/02 06:19:15
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1002 06:19:15.254520 429832 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:15.254770 429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.254780 429832 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:15.254792 429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.255023 429832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:15.255645 429832 out.go:368] Setting JSON to false
I1002 06:19:15.256783 429832 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1002 06:19:15.256873 429832 start.go:140] virtualization: kvm guest
I1002 06:19:15.258505 429832 out.go:179] * [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1002 06:19:15.259591 429832 out.go:179] - MINIKUBE_LOCATION=21643
I1002 06:19:15.259597 429832 notify.go:220] Checking for updates...
I1002 06:19:15.260831 429832 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1002 06:19:15.262162 429832 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
I1002 06:19:15.263266 429832 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
I1002 06:19:15.264267 429832 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1002 06:19:15.265202 429832 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1002 06:19:15.266577 429832 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:15.267099 429832 driver.go:421] Setting default libvirt URI to qemu:///system
I1002 06:19:15.289007 429832 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
I1002 06:19:15.289098 429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:19:15.340141 429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.330436749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:19:15.340263 429832 docker.go:318] overlay module found
I1002 06:19:15.341738 429832 out.go:179] * Using the docker driver based on existing profile
I1002 06:19:15.342748 429832 start.go:304] selected driver: docker
I1002 06:19:15.342764 429832 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 06:19:15.342901 429832 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1002 06:19:15.343027 429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:19:15.398873 429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.389220623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:19:15.399597 429832 cni.go:84] Creating CNI manager for ""
I1002 06:19:15.399659 429832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1002 06:19:15.399708 429832 start.go:348] cluster config:
{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1002 06:19:15.401261 429832 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
30b85d3947fb7 56cc512116c8f 5 minutes ago Exited mount-munger 0 dfc668eebccd9 busybox-mount default
f6fa8a43c86fe 5107333e08a87 10 minutes ago Running mysql 0 070ec94bcd8f3 mysql-5bb876957f-cvvj2 default
c2503d55a98b1 c3994bc696102 11 minutes ago Running kube-apiserver 0 c790b8963b464 kube-apiserver-functional-199910 kube-system
0826543035037 c80c8dbafe7dd 11 minutes ago Running kube-controller-manager 2 b57c704d4fda4 kube-controller-manager-functional-199910 kube-system
a46012c8d77f4 c80c8dbafe7dd 11 minutes ago Exited kube-controller-manager 1 b57c704d4fda4 kube-controller-manager-functional-199910 kube-system
cf1bb9911e32d 7dd6aaa1717ab 11 minutes ago Running kube-scheduler 1 054fb86bca056 kube-scheduler-functional-199910 kube-system
73ee8e97a7860 5f1f5298c888d 11 minutes ago Running etcd 1 1a30670caae60 etcd-functional-199910 kube-system
fdcf1ac8db1d9 6e38f40d628db 11 minutes ago Running storage-provisioner 1 a5747f1c0a73f storage-provisioner kube-system
f0bafa2f4b2c3 52546a367cc9e 11 minutes ago Running coredns 1 1893e74468bc9 coredns-66bc5c9577-lfbdz kube-system
b42bb88d18439 fc25172553d79 11 minutes ago Running kube-proxy 1 1e56f086695d1 kube-proxy-6fsg9 kube-system
ddd36a4ac2f9f 409467f978b4a 11 minutes ago Running kindnet-cni 1 3b55bb31d621a kindnet-nlvlv kube-system
ff9cd2a0d98dc 52546a367cc9e 11 minutes ago Exited coredns 0 1893e74468bc9 coredns-66bc5c9577-lfbdz kube-system
80952a8c29127 6e38f40d628db 11 minutes ago Exited storage-provisioner 0 a5747f1c0a73f storage-provisioner kube-system
0e6365f8d4553 409467f978b4a 12 minutes ago Exited kindnet-cni 0 3b55bb31d621a kindnet-nlvlv kube-system
97a61b088ac75 fc25172553d79 12 minutes ago Exited kube-proxy 0 1e56f086695d1 kube-proxy-6fsg9 kube-system
08e26663c7e44 5f1f5298c888d 12 minutes ago Exited etcd 0 1a30670caae60 etcd-functional-199910 kube-system
53950e32492bd 7dd6aaa1717ab 12 minutes ago Exited kube-scheduler 0 054fb86bca056 kube-scheduler-functional-199910 kube-system
==> containerd <==
Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.884796936Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.886503009Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:05 functional-199910 containerd[3842]: time="2025-10-02T06:20:05.475070954Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106347197Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106419885Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.884544461Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.886118204Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:44 functional-199910 containerd[3842]: time="2025-10-02T06:20:44.462408812Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454784488Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454808932Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.884513245Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.886275527Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:57 functional-199910 containerd[3842]: time="2025-10-02T06:20:57.461979996Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089171758Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089223010Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.884785044Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.886572046Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:22:20 functional-199910 containerd[3842]: time="2025-10-02T06:22:20.500547860Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127605836Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127686885Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.884730907Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.886326946Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:22:29 functional-199910 containerd[3842]: time="2025-10-02T06:22:29.462579836Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105354289Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105447128Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
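Note on the repeated `failed to decode hosts.toml` errors above: containerd reads per-registry host configuration from `hosts.toml` files (conventionally under `/etc/containerd/certs.d/<registry>/`), and an `invalid \`host\` tree` error means that file does not parse as the expected TOML shape. As a hedged sketch only (the mirror URL and file path below are assumptions for illustration, not values taken from this log), a minimally valid `hosts.toml` looks like:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml  (path is the conventional
# containerd certs.d location; adjust to the node's actual config root)

# Upstream registry used when no mirror host answers.
server = "https://registry-1.docker.io"

# Hypothetical mirror entry; the URL is an assumption for this sketch.
# Each [host."…"] table must use this exact quoted-URL key form --
# a bare or misquoted key is one way to produce an "invalid `host` tree".
[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```

Separately, the `429 Too Many Requests` failures on the dashboard and metrics-scraper pulls are Docker Hub's unauthenticated pull rate limit; pulling through an authenticated mirror, or pre-loading the images into the cluster (e.g. with `minikube image load`), are common ways to avoid hitting it in CI.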
==> coredns [f0bafa2f4b2c3011baa87254e1977f39f0be514d931e8c686e86c0aa29d3b6ff] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:34140 - 64363 "HINFO IN 6421585372913567829.3122299182694518145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036751794s
==> coredns [ff9cd2a0d98dc28d87af1e35cad013fc327b6424af6df9d7e63b16213372132f] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:58400 - 64231 "HINFO IN 6580085158149520847.7999093773305824839. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113539547s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-199910
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-199910
kubernetes.io/os=linux
minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
minikube.k8s.io/name=functional-199910
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_02T06_12_09_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 02 Oct 2025 06:12:06 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-199910
AcquireTime: <unset>
RenewTime: Thu, 02 Oct 2025 06:24:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 02 Oct 2025 06:20:09 +0000 Thu, 02 Oct 2025 06:12:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Oct 2025 06:20:09 +0000 Thu, 02 Oct 2025 06:12:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Oct 2025 06:20:09 +0000 Thu, 02 Oct 2025 06:12:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Oct 2025 06:20:09 +0000 Thu, 02 Oct 2025 06:12:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-199910
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
System Info:
Machine ID: 26786d65ffe140308153a7ab60e7851e
System UUID: d452a066-3b39-4b2a-bb48-a6d5f3f27351
Boot ID: 928ae711-d7b1-4c1e-8d35-81d1dcf6c7b5
Kernel Version: 6.8.0-1041-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-w8zxz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
default hello-node-connect-7d85dfc575-6vrx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
default mysql-5bb876957f-cvvj2 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 10m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10m
kube-system coredns-66bc5c9577-lfbdz 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 12m
kube-system etcd-functional-199910 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 12m
kube-system kindnet-nlvlv 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 12m
kube-system kube-apiserver-functional-199910 250m (3%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-functional-199910 200m (2%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-proxy-6fsg9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system kube-scheduler-functional-199910 100m (1%) 0 (0%) 0 (0%) 0 (0%) 12m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-4mzf5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-vlp7x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1450m (18%) 800m (10%)
memory 732Mi (2%) 920Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kube-proxy
Normal Starting 11m kube-proxy
Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 12m kubelet Node functional-199910 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 12m kubelet Node functional-199910 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 12m kubelet Node functional-199910 status is now: NodeHasSufficientPID
Normal Starting 12m kubelet Starting kubelet.
Normal RegisteredNode 12m node-controller Node functional-199910 event: Registered Node functional-199910 in Controller
Normal NodeReady 11m kubelet Node functional-199910 status is now: NodeReady
Normal Starting 11m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 11m (x8 over 11m) kubelet Node functional-199910 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x8 over 11m) kubelet Node functional-199910 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x7 over 11m) kubelet Node functional-199910 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 11m node-controller Node functional-199910 event: Registered Node functional-199910 in Controller
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 5d 35 94 2e 01 08 06
[ +0.058144] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
[ +7.548229] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
[Oct 2 05:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
[ +8.618588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e d9 8c 9f 19 f9 08 06
[ +0.000520] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
[ +0.839544] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
[ +18.414075] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 fd ef 12 40 02 08 06
[ +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
[ +5.829441] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 7c 73 c3 88 96 08 06
[ +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
[ +15.373470] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 de db d2 97 bd 08 06
[ +0.000392] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
==> etcd [08e26663c7e447c1795a392880992c67b7efa0c467f1cc535f872ec73d63ad38] <==
{"level":"warn","ts":"2025-10-02T06:12:05.810669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.816571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.822847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.835996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.842577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:12:05.894403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-02T06:12:50.300585Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-02T06:12:50.300664Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-10-02T06:12:50.300764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-02T06:12:57.302505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-02T06:12:57.302730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-02T06:12:57.303059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-02T06:12:57.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T06:12:57.302839Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-10-02T06:12:57.303142Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-10-02T06:12:57.303158Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"error","ts":"2025-10-02T06:12:57.302240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"warn","ts":"2025-10-02T06:12:57.303485Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-02T06:12:57.303507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-02T06:12:57.303516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T06:12:57.304975Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-10-02T06:12:57.305029Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-02T06:12:57.305061Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-10-02T06:12:57.305075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> etcd [73ee8e97a78602fac85e902429f6351307dfd75d4e432518b658ad78e1e9d2b1] <==
{"level":"warn","ts":"2025-10-02T06:13:10.328017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50878","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.335433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.342042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.352795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50930","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:50930: read: connection reset by peer"}
{"level":"warn","ts":"2025-10-02T06:13:10.360606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.367410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.373004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.379059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.384978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.391903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.398291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51064","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.404172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.410209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.415890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.421667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.427687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.433534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.448100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.451132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51204","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.456833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.462331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-02T06:13:10.511748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51280","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-02T06:23:10.057111Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1114}
{"level":"info","ts":"2025-10-02T06:23:10.075219Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1114,"took":"17.775311ms","hash":1240741800,"current-db-size-bytes":3690496,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
{"level":"info","ts":"2025-10-02T06:23:10.075258Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1240741800,"revision":1114,"compact-revision":-1}
==> kernel <==
06:24:16 up 2:06, 0 user, load average: 0.08, 0.30, 0.62
Linux functional-199910 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [0e6365f8d4553ed3a43b91cd235f10a301c95e3ea2ce0b800d81621f382c5540] <==
I1002 06:12:15.314795 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1002 06:12:15.315055 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1002 06:12:15.315240 1 main.go:148] setting mtu 1500 for CNI
I1002 06:12:15.315261 1 main.go:178] kindnetd IP family: "ipv4"
I1002 06:12:15.315278 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-10-02T06:12:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1002 06:12:15.436330 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1002 06:12:15.436387 1 controller.go:381] "Waiting for informer caches to sync"
I1002 06:12:15.436773 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1002 06:12:15.614228 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1002 06:12:15.914278 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1002 06:12:15.914307 1 metrics.go:72] Registering metrics
I1002 06:12:15.914403 1 controller.go:711] "Syncing nftables rules"
I1002 06:12:25.437165 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:12:25.437244 1 main.go:301] handling current node
I1002 06:12:35.444020 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:12:35.444049 1 main.go:301] handling current node
I1002 06:12:45.439055 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:12:45.439091 1 main.go:301] handling current node
==> kindnet [ddd36a4ac2f9f0eb8d3a5fb2bf60cae5868d293c2b8e98bf9f4f9f13c884ba40] <==
I1002 06:22:11.334785 1 main.go:301] handling current node
I1002 06:22:21.334694 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:22:21.334726 1 main.go:301] handling current node
I1002 06:22:31.340901 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:22:31.340954 1 main.go:301] handling current node
I1002 06:22:41.334569 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:22:41.334622 1 main.go:301] handling current node
I1002 06:22:51.332667 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:22:51.332703 1 main.go:301] handling current node
I1002 06:23:01.331741 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:01.331831 1 main.go:301] handling current node
I1002 06:23:11.333511 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:11.333545 1 main.go:301] handling current node
I1002 06:23:21.335778 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:21.335809 1 main.go:301] handling current node
I1002 06:23:31.340881 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:31.340935 1 main.go:301] handling current node
I1002 06:23:41.339996 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:41.340031 1 main.go:301] handling current node
I1002 06:23:51.332064 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:23:51.332128 1 main.go:301] handling current node
I1002 06:24:01.340172 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:24:01.340229 1 main.go:301] handling current node
I1002 06:24:11.340037 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1002 06:24:11.340080 1 main.go:301] handling current node
==> kube-apiserver [c2503d55a98b117f95312339e0408d46140df26986746e655ee44ca4b17d1543] <==
I1002 06:13:10.968328 1 cache.go:39] Caches are synced for autoregister controller
I1002 06:13:10.973719 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1002 06:13:11.003861 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1002 06:13:11.864528 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1002 06:13:12.011502 1 controller.go:667] quota admission added evaluator for: serviceaccounts
W1002 06:13:12.168303 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1002 06:13:12.169465 1 controller.go:667] quota admission added evaluator for: endpoints
I1002 06:13:12.174164 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1002 06:13:12.718173 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1002 06:13:12.803056 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1002 06:13:12.845127 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1002 06:13:12.850799 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1002 06:13:27.458495 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.42.200"}
I1002 06:13:32.001334 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.74.230"}
I1002 06:13:32.038247 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1002 06:13:34.269456 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.97.18"}
I1002 06:13:39.536669 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.16.67"}
E1002 06:13:47.153716 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57876: use of closed network connection
E1002 06:13:48.672727 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57894: use of closed network connection
E1002 06:13:50.778146 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50950: use of closed network connection
I1002 06:13:50.901947 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.248.3"}
I1002 06:19:16.194630 1 controller.go:667] quota admission added evaluator for: namespaces
I1002 06:19:16.288521 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.174.148"}
I1002 06:19:16.299708 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.92"}
I1002 06:23:10.900793 1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
==> kube-controller-manager [0826543035037388cfadc1a20ffaeab10d0bd916e3a71611034dc156659ba3d3] <==
I1002 06:13:14.311871 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1002 06:13:14.311885 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1002 06:13:14.312075 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1002 06:13:14.313120 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1002 06:13:14.313147 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1002 06:13:14.313172 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1002 06:13:14.313212 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1002 06:13:14.313272 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1002 06:13:14.313932 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1002 06:13:14.313956 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1002 06:13:14.316028 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1002 06:13:14.318325 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1002 06:13:14.318470 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1002 06:13:14.321892 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1002 06:13:14.323622 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1002 06:13:14.323638 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1002 06:13:14.323648 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1002 06:13:14.325834 1 shared_informer.go:356] "Caches are synced" controller="job"
I1002 06:13:14.331979 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1002 06:19:16.235527 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 06:19:16.239715 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 06:19:16.242312 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 06:19:16.244051 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 06:19:16.245736 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1002 06:19:16.250431 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [a46012c8d77f4d59e1edfd2706d75b7ed32740ed7840142c8b6a163ad8125ce4] <==
I1002 06:12:58.441929 1 serving.go:386] Generated self-signed cert in-memory
I1002 06:12:59.039518 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1002 06:12:59.039550 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 06:12:59.041673 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1002 06:12:59.041844 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1002 06:12:59.046004 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1002 06:12:59.046753 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1002 06:12:59.057653 1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
I1002 06:12:59.057945 1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
I1002 06:13:00.446798 1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
I1002 06:13:00.446836 1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
I1002 06:13:00.446852 1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
F1002 06:13:00.447274 1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
==> kube-proxy [97a61b088ac75deb72639eca5c93e3931e2d507d4d6d431bcb874a08c79f4fd8] <==
I1002 06:12:14.839850 1 server_linux.go:53] "Using iptables proxy"
I1002 06:12:14.903452 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1002 06:12:15.004159 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1002 06:12:15.004210 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1002 06:12:15.004324 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1002 06:12:15.023563 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1002 06:12:15.023618 1 server_linux.go:132] "Using iptables Proxier"
I1002 06:12:15.029815 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1002 06:12:15.030570 1 server.go:527] "Version info" version="v1.34.1"
I1002 06:12:15.030605 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 06:12:15.032931 1 config.go:200] "Starting service config controller"
I1002 06:12:15.032981 1 config.go:403] "Starting serviceCIDR config controller"
I1002 06:12:15.033021 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1002 06:12:15.032994 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1002 06:12:15.033046 1 config.go:106] "Starting endpoint slice config controller"
I1002 06:12:15.033079 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1002 06:12:15.033118 1 config.go:309] "Starting node config controller"
I1002 06:12:15.033128 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1002 06:12:15.033134 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1002 06:12:15.133970 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1002 06:12:15.133970 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1002 06:12:15.133989 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-proxy [b42bb88d18439d6670f0d6210dbbdaf5fd87083935495b10801ef7cf68b6f13a] <==
I1002 06:12:50.973246 1 server_linux.go:53] "Using iptables proxy"
I1002 06:12:51.050576 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1002 06:12:51.151332 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1002 06:12:51.151368 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1002 06:12:51.151463 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1002 06:12:51.172896 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1002 06:12:51.172973 1 server_linux.go:132] "Using iptables Proxier"
I1002 06:12:51.178306 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1002 06:12:51.178665 1 server.go:527] "Version info" version="v1.34.1"
I1002 06:12:51.178703 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 06:12:51.179784 1 config.go:403] "Starting serviceCIDR config controller"
I1002 06:12:51.179811 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1002 06:12:51.179813 1 config.go:200] "Starting service config controller"
I1002 06:12:51.179828 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1002 06:12:51.179858 1 config.go:106] "Starting endpoint slice config controller"
I1002 06:12:51.179864 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1002 06:12:51.179865 1 config.go:309] "Starting node config controller"
I1002 06:12:51.179883 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1002 06:12:51.179890 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1002 06:12:51.280911 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1002 06:12:51.280956 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1002 06:12:51.280964 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [53950e32492bd26e7a71310b1a5140df8125ff1d730bb46b83d84ae621fd3298] <==
E1002 06:12:06.276119 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1002 06:12:06.276159 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1002 06:12:06.276232 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1002 06:12:06.276228 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1002 06:12:06.276252 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1002 06:12:06.276317 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1002 06:12:06.276332 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1002 06:12:06.276345 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1002 06:12:06.276383 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1002 06:12:06.276398 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1002 06:12:07.085724 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1002 06:12:07.157911 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1002 06:12:07.356949 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1002 06:12:07.371978 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1002 06:12:07.427138 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1002 06:12:07.458065 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1002 06:12:07.475022 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1002 06:12:07.488961 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
I1002 06:12:07.672968 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 06:12:57.410262 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1002 06:12:57.410597 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 06:12:57.410616 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1002 06:12:57.410653 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1002 06:12:57.410701 1 server.go:265] "[graceful-termination] secure server is exiting"
E1002 06:12:57.410723 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [cf1bb9911e32d0b099e600db16880966f3a50443fd65c345cbbf32cb28da5b5a] <==
I1002 06:12:58.351746 1 serving.go:386] Generated self-signed cert in-memory
I1002 06:12:59.028789 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1002 06:12:59.028819 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1002 06:12:59.033800 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1002 06:12:59.033904 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1002 06:12:59.033938 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I1002 06:12:59.033975 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1002 06:12:59.034768 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 06:12:59.034785 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 06:12:59.034808 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 06:12:59.035924 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1002 06:12:59.134547 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I1002 06:12:59.135726 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1002 06:12:59.136328 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E1002 06:13:10.905328 1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1002 06:13:10.905345 1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1002 06:13:10.905353 1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1002 06:13:10.905408 1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1002 06:13:10.905328 1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
==> kubelet <==
Oct 02 06:23:21 functional-199910 kubelet[5002]: E1002 06:23:21.884573 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
Oct 02 06:23:26 functional-199910 kubelet[5002]: E1002 06:23:26.883388 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
Oct 02 06:23:30 functional-199910 kubelet[5002]: E1002 06:23:30.883991 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
Oct 02 06:23:31 functional-199910 kubelet[5002]: E1002 06:23:31.883978 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
Oct 02 06:23:33 functional-199910 kubelet[5002]: E1002 06:23:33.883568 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.883981 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.884673 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
Oct 02 06:23:41 functional-199910 kubelet[5002]: E1002 06:23:41.883828 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
Oct 02 06:23:42 functional-199910 kubelet[5002]: E1002 06:23:42.884256 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
Oct 02 06:23:43 functional-199910 kubelet[5002]: E1002 06:23:43.883948 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
Oct 02 06:23:47 functional-199910 kubelet[5002]: E1002 06:23:47.884012 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
Oct 02 06:23:50 functional-199910 kubelet[5002]: E1002 06:23:50.883817 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
Oct 02 06:23:51 functional-199910 kubelet[5002]: E1002 06:23:51.884534 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
Oct 02 06:23:53 functional-199910 kubelet[5002]: E1002 06:23:53.884011 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
Oct 02 06:23:53 functional-199910 kubelet[5002]: E1002 06:23:53.884701 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
Oct 02 06:23:54 functional-199910 kubelet[5002]: E1002 06:23:54.883805 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
Oct 02 06:23:58 functional-199910 kubelet[5002]: E1002 06:23:58.884853 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
Oct 02 06:24:01 functional-199910 kubelet[5002]: E1002 06:24:01.883363 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
Oct 02 06:24:04 functional-199910 kubelet[5002]: E1002 06:24:04.884168 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
Oct 02 06:24:06 functional-199910 kubelet[5002]: E1002 06:24:06.883720 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
Oct 02 06:24:06 functional-199910 kubelet[5002]: E1002 06:24:06.884491 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
Oct 02 06:24:08 functional-199910 kubelet[5002]: E1002 06:24:08.883646 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
Oct 02 06:24:11 functional-199910 kubelet[5002]: E1002 06:24:11.884475 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
Oct 02 06:24:14 functional-199910 kubelet[5002]: E1002 06:24:14.883485 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
Oct 02 06:24:16 functional-199910 kubelet[5002]: E1002 06:24:16.884358 5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
==> storage-provisioner [80952a8c291275da5e35ad70882231fdd9a7d83825c994efd21bb4f51557d477] <==
I1002 06:12:26.077595 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199910_0d7cdd1d-c7b6-42fc-bd8c-cd81e39b8dd7!
W1002 06:12:27.985165 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:27.988636 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:29.991413 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:29.995879 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:31.999091 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:32.003058 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:34.006471 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:34.010034 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:36.013002 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:36.017547 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:38.019738 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:38.023243 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:40.026764 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:40.030794 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:42.033995 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:42.037680 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:44.041227 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:44.046725 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:46.050103 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:46.053714 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:48.056100 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:48.059659 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:50.063069 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:12:50.067736 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [fdcf1ac8db1d92b12e319a16fb30394c6f6dce3eecf4e5c46c0c0f15efea87df] <==
W1002 06:23:51.480675 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:53.483328 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:53.487976 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:55.492808 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:55.496479 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:57.498650 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:57.501995 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:59.505191 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:23:59.508617 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:01.511615 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:01.515208 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:03.518551 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:03.522028 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:05.525368 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:05.530483 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:07.533367 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:07.536775 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:09.538772 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:09.542901 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:11.545779 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:11.549434 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:13.552714 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:13.557025 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:15.559674 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1002 06:24:15.563544 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
helpers_test.go:269: (dbg) Run: kubectl --context functional-199910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1 (87.466456ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-199910/192.168.49.2
Start Time: Thu, 02 Oct 2025 06:19:04 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
mount-munger:
Container ID: containerd://30b85d3947fb716365efd7ebc1d9aa1ae0a31acc10a239c45a439219b7aacac2
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 02 Oct 2025 06:19:06 +0000
Finished: Thu, 02 Oct 2025 06:19:06 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gvnd (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-5gvnd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m12s default-scheduler Successfully assigned default/busybox-mount to functional-199910
Normal Pulling 5m13s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m11s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.042s (2.042s including waiting). Image size: 2395207 bytes.
Normal Created 5m11s kubelet Created container: mount-munger
Normal Started 5m11s kubelet Started container mount-munger
Name: hello-node-75c85bcc94-w8zxz
Namespace: default
Priority: 0
Service Account: default
Node: functional-199910/192.168.49.2
Start Time: Thu, 02 Oct 2025 06:13:50 +0000
Labels: app=hello-node
pod-template-hash=75c85bcc94
Annotations: <none>
Status: Pending
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Controlled By: ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfndz (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-hfndz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/hello-node-75c85bcc94-w8zxz to functional-199910
Warning Failed 10m kubelet Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 7m25s (x5 over 10m) kubelet Pulling image "kicbase/echo-server"
Warning Failed 7m22s (x4 over 10m) kubelet Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 7m22s (x5 over 10m) kubelet Error: ErrImagePull
Normal BackOff 23s (x42 over 10m) kubelet Back-off pulling image "kicbase/echo-server"
Warning Failed 23s (x42 over 10m) kubelet Error: ImagePullBackOff
Name: hello-node-connect-7d85dfc575-6vrx2
Namespace: default
Priority: 0
Service Account: default
Node: functional-199910/192.168.49.2
Start Time: Thu, 02 Oct 2025 06:13:39 +0000
Labels: app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations: <none>
Status: Pending
IP: 10.244.0.6
IPs:
IP: 10.244.0.6
Controlled By: ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-kt2br:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
Normal Pulling 7m28s (x5 over 10m) kubelet Pulling image "kicbase/echo-server"
Warning Failed 7m25s (x5 over 10m) kubelet Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 7m25s (x5 over 10m) kubelet Error: ErrImagePull
Normal BackOff 36s (x40 over 10m) kubelet Back-off pulling image "kicbase/echo-server"
Warning Failed 24s (x41 over 10m) kubelet Error: ImagePullBackOff
Name: nginx-svc
Namespace: default
Priority: 0
Service Account: default
Node: functional-199910/192.168.49.2
Start Time: Thu, 02 Oct 2025 06:13:34 +0000
Labels: run=nginx-svc
Annotations: <none>
Status: Pending
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hd7 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-j2hd7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/nginx-svc to functional-199910
Warning Failed 10m kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 7m31s (x5 over 10m) kubelet Pulling image "docker.io/nginx:alpine"
Warning Failed 7m28s (x5 over 10m) kubelet Error: ErrImagePull
Warning Failed 7m28s (x4 over 10m) kubelet Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 41s (x39 over 10m) kubelet Back-off pulling image "docker.io/nginx:alpine"
Warning Failed 26s (x40 over 10m) kubelet Error: ImagePullBackOff
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-199910/192.168.49.2
Start Time: Thu, 02 Oct 2025 06:13:40 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.7
IPs:
IP: 10.244.0.7
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6brnk (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-6brnk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/sp-pod to functional-199910
Warning Failed 8m58s (x4 over 10m) kubelet Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 7m37s (x5 over 10m) kubelet Pulling image "docker.io/nginx"
Warning Failed 7m34s (x5 over 10m) kubelet Error: ErrImagePull
Warning Failed 7m34s kubelet Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 27s (x40 over 10m) kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 27s (x40 over 10m) kubelet Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4mzf5" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vlp7x" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.05s)