=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
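    [editor's note] The assertion at functional_test.go:933 fires when the "dashboard --url" child process never prints an http(s) URL on stdout before the test gives up. A minimal sketch of that kind of check in Go, scanning a subprocess's stdout for the first URL; the helper name and regex are illustrative, not minikube's actual test code:

    // Sketch: wait for the first http(s) URL a command prints to stdout.
    // waitForURL and urlRe are hypothetical, not minikube's test helper;
    // a real helper would also cmd.Wait() or kill the process on timeout.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"regexp"
    	"time"
    )

    var urlRe = regexp.MustCompile(`https?://\S+`)

    func waitForURL(cmd *exec.Cmd, timeout time.Duration) (string, error) {
    	stdout, err := cmd.StdoutPipe()
    	if err != nil {
    		return "", err
    	}
    	if err := cmd.Start(); err != nil {
    		return "", err
    	}
    	found := make(chan string, 1)
    	go func() {
    		defer close(found)
    		sc := bufio.NewScanner(stdout)
    		for sc.Scan() {
    			if u := urlRe.FindString(sc.Text()); u != "" {
    				found <- u
    				return
    			}
    		}
    	}()
    	select {
    	case u := <-found:
    		if u == "" {
    			return "", fmt.Errorf("process exited without printing a URL")
    		}
    		return u, nil
    	case <-time.After(timeout):
    		return "", fmt.Errorf("no URL on stdout within %v", timeout)
    	}
    }

    func main() {
    	// Demo input mimics the URL the dashboard command would have printed.
    	cmd := exec.Command("echo", "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/")
    	u, err := waitForURL(cmd, 5*time.Second)
    	fmt.Println(u, err)
    }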
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-269105 --alsologtostderr -v=1] stderr:
I1101 10:53:10.406195 2885313 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:10.407643 2885313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.407657 2885313 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:10.407663 2885313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.407987 2885313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:10.408274 2885313 mustload.go:66] Loading cluster: functional-269105
I1101 10:53:10.408697 2885313 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:10.409155 2885313 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:10.427185 2885313 host.go:66] Checking if "functional-269105" exists ...
I1101 10:53:10.427488 2885313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 10:53:10.483420 2885313 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.474427151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1101 10:53:10.483534 2885313 api_server.go:166] Checking apiserver status ...
I1101 10:53:10.483597 2885313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 10:53:10.483638 2885313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:10.501497 2885313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:10.618203 2885313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4797/cgroup
I1101 10:53:10.626252 2885313 api_server.go:182] apiserver freezer: "9:freezer:/docker/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/kubepods/burstable/podc806d046dbcb3721a03bcba9e599052c/e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2"
I1101 10:53:10.626361 2885313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/kubepods/burstable/podc806d046dbcb3721a03bcba9e599052c/e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2/freezer.state
I1101 10:53:10.634951 2885313 api_server.go:204] freezer state: "THAWED"
I1101 10:53:10.634992 2885313 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1101 10:53:10.643316 2885313 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1101 10:53:10.643361 2885313 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1101 10:53:10.643547 2885313 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:10.643560 2885313 addons.go:70] Setting dashboard=true in profile "functional-269105"
I1101 10:53:10.643568 2885313 addons.go:239] Setting addon dashboard=true in "functional-269105"
I1101 10:53:10.643595 2885313 host.go:66] Checking if "functional-269105" exists ...
I1101 10:53:10.644034 2885313 cli_runner.go:164] Run: docker container inspect functional-269105 --format={{.State.Status}}
I1101 10:53:10.664778 2885313 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1101 10:53:10.667809 2885313 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1101 10:53:10.670583 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1101 10:53:10.670605 2885313 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1101 10:53:10.670673 2885313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-269105
I1101 10:53:10.688512 2885313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36806 SSHKeyPath:/home/jenkins/minikube-integration/21830-2847530/.minikube/machines/functional-269105/id_rsa Username:docker}
I1101 10:53:10.799605 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1101 10:53:10.799674 2885313 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1101 10:53:10.819757 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1101 10:53:10.819779 2885313 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1101 10:53:10.835269 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1101 10:53:10.835289 2885313 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1101 10:53:10.850658 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1101 10:53:10.850705 2885313 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1101 10:53:10.865364 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1101 10:53:10.865405 2885313 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1101 10:53:10.879225 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1101 10:53:10.879267 2885313 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1101 10:53:10.895774 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1101 10:53:10.895817 2885313 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1101 10:53:10.909437 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1101 10:53:10.909483 2885313 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1101 10:53:10.928111 2885313 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:53:10.928170 2885313 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1101 10:53:10.948370 2885313 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:53:11.758001 2885313 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-269105 addons enable metrics-server
I1101 10:53:11.760845 2885313 addons.go:202] Writing out "functional-269105" config to set dashboard=true...
W1101 10:53:11.761134 2885313 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1101 10:53:11.761776 2885313 kapi.go:59] client config for functional-269105: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/profiles/functional-269105/client.key", CAFile:"/home/jenkins/minikube-integration/21830-2847530/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21203d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 10:53:11.762314 2885313 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 10:53:11.762339 2885313 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 10:53:11.762347 2885313 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 10:53:11.762354 2885313 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 10:53:11.762358 2885313 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 10:53:11.781736 2885313 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard d6c4a0c2-3410-4e01-b475-fe1c09284579 798 0 2025-11-01 10:53:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-01 10:53:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.88.212,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.88.212],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1101 10:53:11.781899 2885313 out.go:285] * Launching proxy ...
* Launching proxy ...
I1101 10:53:11.781973 2885313 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-269105 proxy --port 36195]
I1101 10:53:11.782331 2885313 dashboard.go:159] Waiting for kubectl to output host:port ...
I1101 10:53:11.834222 2885313 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1101 10:53:11.834274 2885313 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1101 10:53:11.866227 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72b8fc1f-3426-45bb-b5e3-900cdad05f4f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282500 TLS:<nil>}
I1101 10:53:11.866309 2885313 retry.go:31] will retry after 71.178µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.871082 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8b0f6482-e9e2-4a23-82c3-6d6eed07a0b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282640 TLS:<nil>}
I1101 10:53:11.871154 2885313 retry.go:31] will retry after 104.835µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.875107 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab0ea456-a8a1-449c-a25c-23e0ed5e4d1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000772f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282780 TLS:<nil>}
I1101 10:53:11.875167 2885313 retry.go:31] will retry after 321.289µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.878988 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[528c7557-3461-490c-86b2-7778d9baef68] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002828c0 TLS:<nil>}
I1101 10:53:11.879043 2885313 retry.go:31] will retry after 414.711µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.882805 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc58630f-4344-4d7e-84b0-d5a214303a5e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282a00 TLS:<nil>}
I1101 10:53:11.882859 2885313 retry.go:31] will retry after 644.89µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.886668 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea3e75be-9d1d-4acc-833f-0fdaedd09f06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40007731c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282b40 TLS:<nil>}
I1101 10:53:11.886741 2885313 retry.go:31] will retry after 762.562µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.890615 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d46c9403-0234-4f57-9fb9-6da6493c68fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aa640 TLS:<nil>}
I1101 10:53:11.890676 2885313 retry.go:31] will retry after 735.291µs: Temporary Error: unexpected response code: 503
I1101 10:53:11.895594 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d61e390-6957-4c9b-927c-b03120fb6e70] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40016680c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aaa00 TLS:<nil>}
I1101 10:53:11.895655 2885313 retry.go:31] will retry after 1.877097ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.900752 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92835e34-d21c-458d-a2a7-535b4c871e85] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aab40 TLS:<nil>}
I1101 10:53:11.900817 2885313 retry.go:31] will retry after 3.68662ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.907676 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c87c808a-87f1-4fc5-b1db-fc0b30fc2c94] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40016681c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aac80 TLS:<nil>}
I1101 10:53:11.907736 2885313 retry.go:31] will retry after 5.451838ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.916597 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[839855a8-20ec-47f7-81df-4346f0009d8f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aadc0 TLS:<nil>}
I1101 10:53:11.916656 2885313 retry.go:31] will retry after 3.895171ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.923526 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[238b46fd-de63-43be-843f-10f325c5bc81] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4000773f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282c80 TLS:<nil>}
I1101 10:53:11.923586 2885313 retry.go:31] will retry after 6.918512ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.933537 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5cce4808-47ef-4103-ab2b-d218948adf53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282dc0 TLS:<nil>}
I1101 10:53:11.933596 2885313 retry.go:31] will retry after 13.673795ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.950508 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ea5802a-a01c-4181-abc4-0afb15a4ff78] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x40015f0040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000282f00 TLS:<nil>}
I1101 10:53:11.950567 2885313 retry.go:31] will retry after 23.111872ms: Temporary Error: unexpected response code: 503
I1101 10:53:11.978216 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af0b87b4-c1ac-4022-b5a7-69bd8f5f42ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:11 GMT]] Body:0x4001668480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab180 TLS:<nil>}
I1101 10:53:11.978296 2885313 retry.go:31] will retry after 38.245162ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.020873 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b4a9f21-3341-4371-91d7-ece3accd0fb7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x4001668540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283040 TLS:<nil>}
I1101 10:53:12.020940 2885313 retry.go:31] will retry after 40.213563ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.065340 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16be7f64-a9fe-4be5-b6c2-aa73f1e7ed8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283180 TLS:<nil>}
I1101 10:53:12.065413 2885313 retry.go:31] will retry after 78.371596ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.147537 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69b66fae-1150-410f-99d5-b5f4d878c66e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x4001668640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab2c0 TLS:<nil>}
I1101 10:53:12.147601 2885313 retry.go:31] will retry after 73.326853ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.226033 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe11373e-acb8-4a1f-8b42-b722e10320ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002832c0 TLS:<nil>}
I1101 10:53:12.226115 2885313 retry.go:31] will retry after 75.369392ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.305884 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[442fef44-31ec-4863-bc8f-7797aac3d329] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283400 TLS:<nil>}
I1101 10:53:12.305942 2885313 retry.go:31] will retry after 255.926123ms: Temporary Error: unexpected response code: 503
I1101 10:53:12.566191 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30af9a6d-7c74-4405-9d6a-521da8ce4b7a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:12 GMT]] Body:0x40015f0380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283540 TLS:<nil>}
I1101 10:53:12.566250 2885313 retry.go:31] will retry after 434.379889ms: Temporary Error: unexpected response code: 503
I1101 10:53:13.004865 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e92b6e10-5ab4-43e6-bc16-798a6717168b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:13 GMT]] Body:0x40015f0400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283680 TLS:<nil>}
I1101 10:53:13.004931 2885313 retry.go:31] will retry after 683.564587ms: Temporary Error: unexpected response code: 503
I1101 10:53:13.691321 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4819e1a-1032-4754-99e0-592ef0414000] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:13 GMT]] Body:0x40016688c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab400 TLS:<nil>}
I1101 10:53:13.691387 2885313 retry.go:31] will retry after 456.152727ms: Temporary Error: unexpected response code: 503
I1101 10:53:14.151064 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7505ef2-2263-4ab5-bc2d-7ef17c7cc44c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:14 GMT]] Body:0x4001668980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002837c0 TLS:<nil>}
I1101 10:53:14.151126 2885313 retry.go:31] will retry after 930.392924ms: Temporary Error: unexpected response code: 503
I1101 10:53:15.084904 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9216e64-24aa-4404-9dcf-4b7a4831f536] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:15 GMT]] Body:0x40015f0580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab680 TLS:<nil>}
I1101 10:53:15.084979 2885313 retry.go:31] will retry after 2.485343078s: Temporary Error: unexpected response code: 503
I1101 10:53:17.573548 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db8697ab-d114-4a28-80c4-0697f8fb4432] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:17 GMT]] Body:0x4001668a80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283900 TLS:<nil>}
I1101 10:53:17.573613 2885313 retry.go:31] will retry after 3.172164253s: Temporary Error: unexpected response code: 503
I1101 10:53:20.751076 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d59c3c95-1f08-404e-ba3c-9d1e062f1b46] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:20 GMT]] Body:0x4001668b40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283a40 TLS:<nil>}
I1101 10:53:20.751141 2885313 retry.go:31] will retry after 3.659852909s: Temporary Error: unexpected response code: 503
I1101 10:53:24.414974 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f0a8229-51c6-403c-b65e-08ac46fabcba] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:24 GMT]] Body:0x4001668c00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283b80 TLS:<nil>}
I1101 10:53:24.415047 2885313 retry.go:31] will retry after 3.413096743s: Temporary Error: unexpected response code: 503
I1101 10:53:27.833568 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0182b091-7e8b-4d15-a673-e0a7758a5a84] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:27 GMT]] Body:0x40015f0700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abcc0 TLS:<nil>}
I1101 10:53:27.833628 2885313 retry.go:31] will retry after 4.404351659s: Temporary Error: unexpected response code: 503
I1101 10:53:32.241838 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb73cbea-fe9f-40f9-a58c-e47b0b32db90] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:32 GMT]] Body:0x40015f07c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abe00 TLS:<nil>}
I1101 10:53:32.241901 2885313 retry.go:31] will retry after 10.966516708s: Temporary Error: unexpected response code: 503
I1101 10:53:43.211269 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7e757c8-9354-4ee5-be24-30f4881ac811] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:53:43 GMT]] Body:0x4001668d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000283cc0 TLS:<nil>}
I1101 10:53:43.211333 2885313 retry.go:31] will retry after 22.575667464s: Temporary Error: unexpected response code: 503
I1101 10:54:05.790639 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9603fdff-5249-4725-8024-51e01a8e7070] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:05 GMT]] Body:0x40015f08c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b400 TLS:<nil>}
I1101 10:54:05.790711 2885313 retry.go:31] will retry after 17.288476517s: Temporary Error: unexpected response code: 503
I1101 10:54:23.082188 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0db97852-564a-4cab-9a87-475a7c2aae1e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:23 GMT]] Body:0x4001668e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b540 TLS:<nil>}
I1101 10:54:23.082251 2885313 retry.go:31] will retry after 30.078353988s: Temporary Error: unexpected response code: 503
I1101 10:54:53.164988 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d42d67f5-a698-4086-93a8-75c7e3ec465e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:54:53 GMT]] Body:0x40015f09c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b680 TLS:<nil>}
I1101 10:54:53.165048 2885313 retry.go:31] will retry after 1m22.959111076s: Temporary Error: unexpected response code: 503
I1101 10:56:16.127468 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bc1fd85-8cb8-406f-8942-a9848ab0f4db] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:56:16 GMT]] Body:0x40015f0080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b7c0 TLS:<nil>}
I1101 10:56:16.127535 2885313 retry.go:31] will retry after 1m2.273111896s: Temporary Error: unexpected response code: 503
I1101 10:57:18.404658 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0079bbe3-35d4-4060-b9be-c9e0c15865c5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:57:18 GMT]] Body:0x40016680c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b900 TLS:<nil>}
I1101 10:57:18.404731 2885313 retry.go:31] will retry after 37.387614013s: Temporary Error: unexpected response code: 503
I1101 10:57:55.796386 2885313 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2102068-f986-4bda-9a62-43dd4e24a6a3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:57:55 GMT]] Body:0x40015f0140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032ba40 TLS:<nil>}
I1101 10:57:55.796465 2885313 retry.go:31] will retry after 31.004224405s: Temporary Error: unexpected response code: 503
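    [editor's note] Every probe above gets a 503 from the apiserver's service proxy, which is what kubectl proxy relays while the kubernetes-dashboard endpoints are not ready; the growing delays (71µs, 104µs, ... up to 1m22s) are the jittered exponential backoff driven by retry.go. A minimal sketch of that polling pattern, assuming illustrative names and caps of my own, not minikube's retry package:

    // Sketch: poll a proxied service URL with jittered exponential backoff
    // until it stops answering 503. pollUntilReady and its caps are
    // illustrative; minikube drives this through its own retry package.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"net/http"
    	"time"
    )

    func pollUntilReady(url string, deadline time.Duration) error {
    	start := time.Now()
    	wait := 100 * time.Microsecond
    	for time.Since(start) < deadline {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // dashboard answered; proxy is healthy
    			}
    		}
    		// Sleep a jittered multiple of the current interval, then
    		// double it, capped so one sleep never dominates the budget.
    		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait))))
    		if wait < time.Minute {
    			wait *= 2
    		}
    	}
    	return fmt.Errorf("%s still unhealthy after %v", url, deadline)
    }

    func main() {
    	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
    	fmt.Println(pollUntilReady(url, 5*time.Minute))
    }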
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-269105
helpers_test.go:243: (dbg) docker inspect functional-269105:
-- stdout --
[
{
"Id": "24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224",
"Created": "2025-11-01T10:50:44.925723589Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2875010,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-11-01T10:50:44.999173611Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
"ResolvConfPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/hostname",
"HostsPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/hosts",
"LogPath": "/var/lib/docker/containers/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224/24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224-json.log",
"Name": "/functional-269105",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-269105:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-269105",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "24a1cb67b38d4b1470e607d3e0af99a07b60c1f7ab1c1ac056af873df56f9224",
"LowerDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec-init/diff:/var/lib/docker/overlay2/6ccbdc4e59211c61d83d46bc353aa66c1a8dd6bb2f77e16ffc85d068d750bbe6/diff",
"MergedDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/merged",
"UpperDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/diff",
"WorkDir": "/var/lib/docker/overlay2/50ed4e506a20c8539dad8bf357af86d13d6e0b1038e2fdb0c85fac0d21b181ec/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-269105",
"Source": "/var/lib/docker/volumes/functional-269105/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-269105",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-269105",
"name.minikube.sigs.k8s.io": "functional-269105",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2538e1277595786df7322eb87eed1ac089387f33cba65ce30c44c5c638511e7a",
"SandboxKey": "/var/run/docker/netns/2538e1277595",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36806"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36807"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36810"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36808"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36809"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-269105": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "a2:96:6b:6a:f0:78",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "abad2735c74ff3fa0465945d7ef9b035766ef74981eab5752e96fb447c0a5f1c",
"EndpointID": "d44c9fd95c440eda4875ce4016a71e802cd53b293953d2f843ca4205ca9bfc95",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-269105",
"24a1cb67b38d"
]
}
}
}
}
]
-- /stdout --
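    [editor's note] The NetworkSettings.Ports map in the inspect output above is where the host-side ports come from: 22/tcp is published on 127.0.0.1:36806, matching the ssh client lines earlier in the log, and 8441/tcp (the apiserver) on 36809. The log reads this with a docker inspect Go template; the same query from Go, as a small illustrative helper (the helper name is mine, the container name and template are from the log):

    // Sketch: read the published host port for a container port via the
    // same Go template the log runs with `docker container inspect -f`.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostPort(container, port string) (string, error) {
    	// %q quotes the port key, yielding e.g. {{(index (index
    	// .NetworkSettings.Ports "22/tcp") 0).HostPort}} as in the log.
    	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("functional-269105", "22/tcp")
    	fmt.Println(p, err) // 36806 per the inspect output above
    }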
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-269105 -n functional-269105
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-269105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-269105 logs -n 25: (1.473646905s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-269105 image load --daemon kicbase/echo-server:functional-269105 --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image save kicbase/echo-server:functional-269105 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image rm kicbase/echo-server:functional-269105 --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image save --daemon kicbase/echo-server:functional-269105 --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /etc/test/nested/copy/2849422/hosts │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /etc/ssl/certs/2849422.pem │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /usr/share/ca-certificates/2849422.pem │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /etc/ssl/certs/28494222.pem │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /usr/share/ca-certificates/28494222.pem │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls --format short --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ update-context │ functional-269105 update-context --alsologtostderr -v=2 │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ ssh │ functional-269105 ssh pgrep buildkitd │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ │
│ image │ functional-269105 image build -t localhost/my-image:functional-269105 testdata/build --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls --format yaml --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls --format json --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ image │ functional-269105 image ls --format table --alsologtostderr │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ update-context │ functional-269105 update-context --alsologtostderr -v=2 │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
│ update-context │ functional-269105 update-context --alsologtostderr -v=2 │ functional-269105 │ jenkins │ v1.37.0 │ 01 Nov 25 10:53 UTC │ 01 Nov 25 10:53 UTC │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/01 10:53:10
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1101 10:53:10.092170 2885162 out.go:360] Setting OutFile to fd 1 ...
I1101 10:53:10.092390 2885162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.092413 2885162 out.go:374] Setting ErrFile to fd 2...
I1101 10:53:10.092434 2885162 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:53:10.092747 2885162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-2847530/.minikube/bin
I1101 10:53:10.093161 2885162 out.go:368] Setting JSON to false
I1101 10:53:10.094206 2885162 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70536,"bootTime":1761923854,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1101 10:53:10.094329 2885162 start.go:143] virtualization:
I1101 10:53:10.097765 2885162 out.go:179] * [functional-269105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1101 10:53:10.100904 2885162 out.go:179] - MINIKUBE_LOCATION=21830
I1101 10:53:10.100975 2885162 notify.go:221] Checking for updates...
I1101 10:53:10.106866 2885162 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1101 10:53:10.109873 2885162 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21830-2847530/kubeconfig
I1101 10:53:10.113422 2885162 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
I1101 10:53:10.116395 2885162 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1101 10:53:10.119271 2885162 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1101 10:53:10.122573 2885162 config.go:182] Loaded profile config "functional-269105": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1101 10:53:10.123146 2885162 driver.go:422] Setting default libvirt URI to qemu:///system
I1101 10:53:10.151134 2885162 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1101 10:53:10.151239 2885162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 10:53:10.231673 2885162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.22209804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1101 10:53:10.231778 2885162 docker.go:319] overlay module found
I1101 10:53:10.235202 2885162 out.go:179] * Using the docker driver based on existing profile
I1101 10:53:10.238334 2885162 start.go:309] selected driver: docker
I1101 10:53:10.238355 2885162 start.go:930] validating driver "docker" against &{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 10:53:10.238453 2885162 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1101 10:53:10.238571 2885162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1101 10:53:10.313341 2885162 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-01 10:53:10.303701836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1101 10:53:10.313765 2885162 cni.go:84] Creating CNI manager for ""
I1101 10:53:10.313826 2885162 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1101 10:53:10.313879 2885162 start.go:353] cluster config:
{Name:functional-269105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-269105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1101 10:53:10.317647 2885162 out.go:179] * dry-run validation complete!
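
The block above is minikube echoing the persisted cluster config for this profile during its dry-run pass. If you need to compare it against what is actually on disk, the profile store keeps the same structure as JSON; a sketch, assuming the MINIKUBE_HOME shown earlier in this log:

# Hypothetical check: the profile config lives under MINIKUBE_HOME/profiles/<name>/.
export MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-2847530/.minikube
cat "$MINIKUBE_HOME/profiles/functional-269105/config.json"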
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
c5ce950fccd37 1611cd07b61d5 5 minutes ago Exited mount-munger 0 4dd0baebf0726 busybox-mount default
f5190b42b4a13 ce2d2cda2d858 5 minutes ago Running echo-server 0 316e15de3b5bf hello-node-75c85bcc94-r6zbp default
2074895f39dce 46fabdd7f288c 5 minutes ago Running myfrontend 0 fc327f3b62760 sp-pod default
e002ff4226b22 ce2d2cda2d858 5 minutes ago Running echo-server 0 e7541b814d11e hello-node-connect-7d85dfc575-nfspr default
a1d1a8f616e8f cbad6347cca28 5 minutes ago Running nginx 0 89487537068a3 nginx-svc default
5fe425c496959 ba04bb24b9575 5 minutes ago Running storage-provisioner 2 7e48d6f447efa storage-provisioner kube-system
e51a5e843eeba 43911e833d64d 6 minutes ago Running kube-apiserver 0 b271ca0a3d8f1 kube-apiserver-functional-269105 kube-system
fa15b362c02f8 7eb2c6ff0c5a7 6 minutes ago Running kube-controller-manager 2 c32008206199f kube-controller-manager-functional-269105 kube-system
365bcd5f34ee3 a1894772a478e 6 minutes ago Running etcd 1 6b883b8e1ec4e etcd-functional-269105 kube-system
91c3f7ce6c557 ba04bb24b9575 6 minutes ago Exited storage-provisioner 1 7e48d6f447efa storage-provisioner kube-system
03b2b4b635c97 138784d87c9c5 6 minutes ago Running coredns 1 4c1b03c734999 coredns-66bc5c9577-crvrc kube-system
14ba992862f8d 05baa95f5142d 6 minutes ago Running kube-proxy 1 d1ed1e61fa70c kube-proxy-mwwf8 kube-system
2ebe0ddeb3952 b1a8c6f707935 6 minutes ago Running kindnet-cni 1 bafc54f24528b kindnet-fz7g5 kube-system
6b6a8c316335b 7eb2c6ff0c5a7 6 minutes ago Exited kube-controller-manager 1 c32008206199f kube-controller-manager-functional-269105 kube-system
e27b4d294d5a2 b5f57ec6b9867 6 minutes ago Running kube-scheduler 1 7b8367ea850bf kube-scheduler-functional-269105 kube-system
8e8aa255cf006 138784d87c9c5 6 minutes ago Exited coredns 0 4c1b03c734999 coredns-66bc5c9577-crvrc kube-system
adf77118d73f2 b1a8c6f707935 6 minutes ago Exited kindnet-cni 0 bafc54f24528b kindnet-fz7g5 kube-system
baed3d2fb8f06 05baa95f5142d 6 minutes ago Exited kube-proxy 0 d1ed1e61fa70c kube-proxy-mwwf8 kube-system
60cd23f465992 b5f57ec6b9867 7 minutes ago Exited kube-scheduler 0 7b8367ea850bf kube-scheduler-functional-269105 kube-system
c0dd00cac2ad0 a1894772a478e 7 minutes ago Exited etcd 0 6b883b8e1ec4e etcd-functional-269105 kube-system
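
Note what is missing from this table: there are no kubernetes-dashboard containers at all, even though the describe-nodes section below shows both dashboard pods scheduled for about five minutes. That matches the pull failures in the containerd log that follows. One way to confirm from the node itself (a sketch; crictl ships inside the kicbase image):

# List all containers via the CRI, including exited ones, and look for dashboard entries.
minikube -p functional-269105 ssh -- sudo crictl ps -a | grep -i dashboard || echo "no dashboard containers"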
==> containerd <==
Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.174799693Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.177467546Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.287931259Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.569837728Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 01 10:53:52 functional-269105 containerd[3607]: time="2025-11-01T10:53:52.569950069Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.177869821Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.181552546Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.311725009Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.612548806Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.612593605Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.613940832Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.616281088Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:54:42 functional-269105 containerd[3607]: time="2025-11-01T10:54:42.749275288Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:54:43 functional-269105 containerd[3607]: time="2025-11-01T10:54:43.014974292Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 01 10:54:43 functional-269105 containerd[3607]: time="2025-11-01T10:54:43.015036395Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.175156238Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.177554454Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.307896191Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.717531547Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 01 10:56:07 functional-269105 containerd[3607]: time="2025-11-01T10:56:07.717641156Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.174916987Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.177309378Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.316584510Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.618061555Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Nov 01 10:56:13 functional-269105 containerd[3607]: time="2025-11-01T10:56:13.618172682Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
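
This containerd section explains why no dashboard containers ever appeared in the table above: every pull of kubernetesui/dashboard and kubernetesui/metrics-scraper is rejected with 429 Too Many Requests by Docker Hub's unauthenticated rate limit, so the dashboard pods never get their images. The interleaved "failed to decode hosts.toml" errors suggest the registry host configuration under /etc/containerd/certs.d is malformed, so any mirror that might have absorbed these pulls is being skipped. For reference, a minimal well-formed hosts.toml for docker.io looks like this (a sketch to run inside the node; the mirror URL is a placeholder):

# Sketch: write a valid containerd hosts.toml for docker.io.
# "invalid `host` tree" usually means the [host."..."] table below is malformed.
sudo mkdir -p /etc/containerd/certs.d/docker.io
sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF

Alternatively, the rate limit can be sidestepped by pre-loading the two images into the cluster (for example `minikube -p functional-269105 image load docker.io/kubernetesui/dashboard:v2.7.0`, assuming the image is available locally).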
==> coredns [03b2b4b635c97f892ebb1f38bf0ad8ae8742a295ad6f2d4509118b91d8482940] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:33051 - 55519 "HINFO IN 7014377530121784897.6476830027086058127. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021658161s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
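
The connection-refused errors above come from the restarted CoreDNS (attempt 1) starting while the apiserver is still coming back up; per the kube-apiserver log further down, the new instance's caches sync at about 10:52:13, after which these retries stop. The readiness that CoreDNS's kubernetes plugin is effectively waiting on can be probed directly (a sketch):

# Query the apiserver readiness endpoint CoreDNS depends on.
kubectl --context functional-269105 get --raw='/readyz?verbose' | tail -n 5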
==> coredns [8e8aa255cf0060a99c2e47d38779c19dd5993322788ecf71fee0779181966448] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:37081 - 27007 "HINFO IN 6597586311678208152.2332564119826185821. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028154742s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-269105
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=functional-269105
kubernetes.io/os=linux
minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
minikube.k8s.io/name=functional-269105
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_01T10_51_11_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 01 Nov 2025 10:51:07 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-269105
AcquireTime: <unset>
RenewTime: Sat, 01 Nov 2025 10:58:10 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 01 Nov 2025 10:57:51 +0000 Sat, 01 Nov 2025 10:51:04 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 01 Nov 2025 10:57:51 +0000 Sat, 01 Nov 2025 10:51:04 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 01 Nov 2025 10:57:51 +0000 Sat, 01 Nov 2025 10:51:04 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 01 Nov 2025 10:57:51 +0000 Sat, 01 Nov 2025 10:51:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-269105
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: ef38fbc8889a0e5f09e9dc0868f5cd19
System UUID: f6014abf-b080-49ee-aa0d-a14a82ee2829
Boot ID: eebecd53-57fd-46e5-aa39-103fca906436
Kernel Version: 5.15.0-1084-aws
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.28
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-r6zbp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m13s
default hello-node-connect-7d85dfc575-nfspr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m22s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m31s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m13s
kube-system coredns-66bc5c9577-crvrc 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 6m56s
kube-system etcd-functional-269105 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 7m1s
kube-system kindnet-fz7g5 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 6m56s
kube-system kube-apiserver-functional-269105 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m57s
kube-system kube-controller-manager-functional-269105 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m1s
kube-system kube-proxy-mwwf8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m56s
kube-system kube-scheduler-functional-269105 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m2s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m55s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-cjngd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-bcspl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (2%) 220Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m54s kube-proxy
Normal Starting 5m56s kube-proxy
Normal Starting 7m9s kubelet Starting kubelet.
Warning CgroupV1 7m9s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientMemory 7m9s (x8 over 7m9s) kubelet Node functional-269105 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m9s (x8 over 7m9s) kubelet Node functional-269105 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m9s (x7 over 7m9s) kubelet Node functional-269105 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m9s kubelet Updated Node Allocatable limit across pods
Normal NodeAllocatableEnforced 7m1s kubelet Updated Node Allocatable limit across pods
Warning CgroupV1 7m1s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientMemory 7m1s kubelet Node functional-269105 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m1s kubelet Node functional-269105 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m1s kubelet Node functional-269105 status is now: NodeHasSufficientPID
Normal Starting 7m1s kubelet Starting kubelet.
Normal RegisteredNode 6m57s node-controller Node functional-269105 event: Registered Node functional-269105 in Controller
Normal NodeReady 6m45s kubelet Node functional-269105 status is now: NodeReady
Normal Starting 6m1s kubelet Starting kubelet.
Warning CgroupV1 6m1s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeHasSufficientMemory 6m1s (x8 over 6m1s) kubelet Node functional-269105 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m1s (x8 over 6m1s) kubelet Node functional-269105 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m1s (x7 over 6m1s) kubelet Node functional-269105 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m1s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m54s node-controller Node functional-269105 event: Registered Node functional-269105 in Controller
==> dmesg <==
[Nov 1 09:26] overlayfs: idmapped layers are currently not supported
[ +0.217637] overlayfs: idmapped layers are currently not supported
[ +42.063471] overlayfs: idmapped layers are currently not supported
[Nov 1 09:28] overlayfs: idmapped layers are currently not supported
[Nov 1 09:29] overlayfs: idmapped layers are currently not supported
[Nov 1 09:30] overlayfs: idmapped layers are currently not supported
[ +22.794250] overlayfs: idmapped layers are currently not supported
[Nov 1 09:31] overlayfs: idmapped layers are currently not supported
[Nov 1 09:32] overlayfs: idmapped layers are currently not supported
[Nov 1 09:33] overlayfs: idmapped layers are currently not supported
[ +18.806441] overlayfs: idmapped layers are currently not supported
[Nov 1 09:34] overlayfs: idmapped layers are currently not supported
[ +47.017810] overlayfs: idmapped layers are currently not supported
[Nov 1 09:35] overlayfs: idmapped layers are currently not supported
[Nov 1 09:36] overlayfs: idmapped layers are currently not supported
[Nov 1 09:37] overlayfs: idmapped layers are currently not supported
[Nov 1 09:38] overlayfs: idmapped layers are currently not supported
[Nov 1 09:39] overlayfs: idmapped layers are currently not supported
[Nov 1 09:40] overlayfs: idmapped layers are currently not supported
[Nov 1 09:42] kauditd_printk_skb: 8 callbacks suppressed
[Nov 1 10:42] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [365bcd5f34ee351f9166d5c6e420daa4dc4c09cea3b62f698486c9b8d7beace5] <==
{"level":"warn","ts":"2025-11-01T10:52:12.645395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43202","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.660361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43222","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.676630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.694103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43264","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.717242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43274","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.733672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43288","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.747768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43316","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.763710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43344","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.779378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.802787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43384","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.815632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.831962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.844904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.859904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.876214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43450","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.892698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43470","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.912691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.924929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.949239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.956919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43538","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:12.977665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:13.005423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:13.019111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:13.034312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43618","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:52:13.103882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
==> etcd [c0dd00cac2ad07744f2cb1c2bdced881cf3e162716ee5c949fb36b9c6d2896eb] <==
{"level":"warn","ts":"2025-11-01T10:51:06.152321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.182625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.236865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.273523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.297402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42888","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.324402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-11-01T10:51:06.442777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42926","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-11-01T10:52:06.238841Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-11-01T10:52:06.238905Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-269105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-11-01T10:52:06.239011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-11-01T10:52:06.240533Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-11-01T10:52:06.241966Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-01T10:52:06.242016Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"warn","ts":"2025-11-01T10:52:06.242030Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-11-01T10:52:06.242079Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2025-11-01T10:52:06.242081Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"error","ts":"2025-11-01T10:52:06.242088Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-01T10:52:06.242094Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-11-01T10:52:06.242154Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-11-01T10:52:06.242164Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-11-01T10:52:06.242171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-01T10:52:06.245367Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-11-01T10:52:06.245444Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-11-01T10:52:06.245478Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-11-01T10:52:06.245489Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-269105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
10:58:11 up 19:40, 0 user, load average: 0.35, 1.10, 2.38
Linux functional-269105 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [2ebe0ddeb3952a0f5601697fe61194fc062cb227c6382ea92df14f78f7317c45] <==
I1101 10:56:07.249711 1 main.go:301] handling current node
I1101 10:56:17.249658 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:56:17.249890 1 main.go:301] handling current node
I1101 10:56:27.249628 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:56:27.249661 1 main.go:301] handling current node
I1101 10:56:37.249311 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:56:37.249345 1 main.go:301] handling current node
I1101 10:56:47.249682 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:56:47.249719 1 main.go:301] handling current node
I1101 10:56:57.252064 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:56:57.252161 1 main.go:301] handling current node
I1101 10:57:07.251534 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:07.251567 1 main.go:301] handling current node
I1101 10:57:17.249957 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:17.249996 1 main.go:301] handling current node
I1101 10:57:27.250513 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:27.250762 1 main.go:301] handling current node
I1101 10:57:37.251948 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:37.252089 1 main.go:301] handling current node
I1101 10:57:47.250987 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:47.251084 1 main.go:301] handling current node
I1101 10:57:57.252873 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:57:57.253110 1 main.go:301] handling current node
I1101 10:58:07.251906 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:58:07.251943 1 main.go:301] handling current node
==> kindnet [adf77118d73f21068b5b815694e02e44b813306fb85927952fbc2cec23152555] <==
I1101 10:51:16.659316 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1101 10:51:16.659700 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1101 10:51:16.659875 1 main.go:148] setting mtu 1500 for CNI
I1101 10:51:16.659889 1 main.go:178] kindnetd IP family: "ipv4"
I1101 10:51:16.659904 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-11-01T10:51:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1101 10:51:16.864970 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1101 10:51:16.865079 1 controller.go:381] "Waiting for informer caches to sync"
I1101 10:51:16.865145 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1101 10:51:16.865410 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1101 10:51:17.148740 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1101 10:51:17.148770 1 metrics.go:72] Registering metrics
I1101 10:51:17.148821 1 controller.go:711] "Syncing nftables rules"
I1101 10:51:26.863441 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:51:26.863497 1 main.go:301] handling current node
I1101 10:51:36.866703 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:51:36.866738 1 main.go:301] handling current node
I1101 10:51:46.864936 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1101 10:51:46.864975 1 main.go:301] handling current node
==> kube-apiserver [e51a5e843eeba1270878d73f1eec896c3fe87319d27a3d712d4daa31006c64e2] <==
I1101 10:52:13.865901 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1101 10:52:13.865908 1 cache.go:39] Caches are synced for autoregister controller
I1101 10:52:13.866056 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1101 10:52:13.871957 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1101 10:52:13.879157 1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
I1101 10:52:13.879247 1 policy_source.go:240] refreshing policies
I1101 10:52:13.930686 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1101 10:52:14.260379 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1101 10:52:14.624761 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1101 10:52:14.955386 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1101 10:52:14.956965 1 controller.go:667] quota admission added evaluator for: endpoints
I1101 10:52:14.962709 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1101 10:52:15.712465 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1101 10:52:15.868962 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1101 10:52:15.961004 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1101 10:52:15.968543 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1101 10:52:17.564587 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1101 10:52:33.731184 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.50.1"}
I1101 10:52:40.592559 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.197.7"}
I1101 10:52:49.333102 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.248.51"}
I1101 10:52:59.021077 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.244.249"}
E1101 10:53:04.810817 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36702: use of closed network connection
I1101 10:53:11.461222 1 controller.go:667] quota admission added evaluator for: namespaces
I1101 10:53:11.726369 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.88.212"}
I1101 10:53:11.749500 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.213.84"}
==> kube-controller-manager [6b6a8c316335bb5ca5f7ee0a54417b6beafa45ff0afc2bca844650a9216fae0a] <==
I1101 10:51:58.614810 1 serving.go:386] Generated self-signed cert in-memory
I1101 10:51:59.501595 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1101 10:51:59.501623 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 10:51:59.503448 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1101 10:51:59.503945 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1101 10:51:59.504023 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1101 10:51:59.504132 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
E1101 10:52:09.505223 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
==> kube-controller-manager [fa15b362c02f8caa5d7f3bac3d179d62ea47d6a52a47e7a6ccf0d55e83580696] <==
I1101 10:52:17.184067 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1101 10:52:17.186517 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1101 10:52:17.187666 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1101 10:52:17.190627 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1101 10:52:17.193825 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1101 10:52:17.196035 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1101 10:52:17.199250 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1101 10:52:17.202636 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1101 10:52:17.205208 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1101 10:52:17.205420 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1101 10:52:17.206789 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1101 10:52:17.206978 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1101 10:52:17.207099 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1101 10:52:17.207143 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1101 10:52:17.210464 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1101 10:52:17.219968 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1101 10:52:17.234116 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1101 10:52:17.234140 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1101 10:52:17.234148 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
E1101 10:53:11.568407 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1101 10:53:11.570135 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1101 10:53:11.589267 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1101 10:53:11.589571 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1101 10:53:11.598266 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1101 10:53:11.602529 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
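
The "serviceaccount not found" errors at the end are a benign startup race: the dashboard addon's ReplicaSets are created a moment before the kubernetes-dashboard ServiceAccount is applied, and the ReplicaSet controller retries until it exists. Both dashboard pods were eventually scheduled (see the describe-nodes section), so the race resolved; a hedged way to double-check:

# Confirm the ServiceAccount exists and the ReplicaSets recovered after the race.
kubectl --context functional-269105 -n kubernetes-dashboard get serviceaccount,replicaset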
==> kube-proxy [14ba992862f8d7c1f6164f45ff4c5cccd522cf9a0ea0c5366eb404e264b85486] <==
I1101 10:51:59.666375 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1101 10:51:59.667366 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 10:52:00.998574 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 10:52:02.611496 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 10:52:07.585461 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-269105&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1101 10:52:15.166775 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1101 10:52:15.166817 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1101 10:52:15.167020 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1101 10:52:15.188281 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1101 10:52:15.188346 1 server_linux.go:132] "Using iptables Proxier"
I1101 10:52:15.192755 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1101 10:52:15.193319 1 server.go:527] "Version info" version="v1.34.1"
I1101 10:52:15.193358 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 10:52:15.196122 1 config.go:106] "Starting endpoint slice config controller"
I1101 10:52:15.196269 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1101 10:52:15.196657 1 config.go:200] "Starting service config controller"
I1101 10:52:15.196764 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1101 10:52:15.197129 1 config.go:403] "Starting serviceCIDR config controller"
I1101 10:52:15.197190 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1101 10:52:15.197699 1 config.go:309] "Starting node config controller"
I1101 10:52:15.197762 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1101 10:52:15.197814 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1101 10:52:15.297002 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1101 10:52:15.297073 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1101 10:52:15.297332 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
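
The "nodePortAddresses is unset" warning above is advisory; the fix the message itself suggests maps to the nodePortAddresses field of KubeProxyConfiguration, which in a kubeadm-managed cluster like this one lives in the kube-proxy ConfigMap (a sketch):

# Locate the field the warning refers to; setting it to ["primary"] restricts
# NodePort listeners to the node's primary IP, as the log message suggests.
kubectl --context functional-269105 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses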
==> kube-proxy [baed3d2fb8f06f29a4fb89c40452ef21b9d69908da32eda6c00e74855e85bcf2] <==
I1101 10:51:16.482515 1 server_linux.go:53] "Using iptables proxy"
I1101 10:51:16.607250 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1101 10:51:16.709879 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1101 10:51:16.710120 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1101 10:51:16.710327 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1101 10:51:16.741290 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1101 10:51:16.741515 1 server_linux.go:132] "Using iptables Proxier"
I1101 10:51:16.760080 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1101 10:51:16.760415 1 server.go:527] "Version info" version="v1.34.1"
I1101 10:51:16.760439 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1101 10:51:16.761885 1 config.go:200] "Starting service config controller"
I1101 10:51:16.761901 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1101 10:51:16.761920 1 config.go:106] "Starting endpoint slice config controller"
I1101 10:51:16.761924 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1101 10:51:16.761937 1 config.go:403] "Starting serviceCIDR config controller"
I1101 10:51:16.761941 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1101 10:51:16.762634 1 config.go:309] "Starting node config controller"
I1101 10:51:16.762648 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1101 10:51:16.762654 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1101 10:51:16.862927 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1101 10:51:16.863027 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1101 10:51:16.863052 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [60cd23f46599220cd3ae9cf8e8c43ee41efa10f43df75b554890316fd0090f27] <==
E1101 10:51:08.377524 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 10:51:08.378237 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1101 10:51:08.378447 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 10:51:08.378670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 10:51:08.378892 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1101 10:51:08.379145 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 10:51:08.379372 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 10:51:08.379709 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1101 10:51:08.379990 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 10:51:08.380198 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 10:51:08.380392 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 10:51:08.380585 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1101 10:51:08.380759 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 10:51:08.384441 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1101 10:51:08.384521 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 10:51:08.384726 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
E1101 10:51:08.384806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 10:51:08.385657 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
I1101 10:51:09.666830 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 10:51:56.108553 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1101 10:51:56.108589 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1101 10:51:56.108612 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1101 10:51:56.108656 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1101 10:51:56.108923 1 server.go:265] "[graceful-termination] secure server is exiting"
E1101 10:51:56.108938 1 run.go:72] "command failed" err="finished without leader elect"
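The "forbidden" list errors at 10:51:08 are ordinary scheduler start-up noise: the informers begin listing before the apiserver has finished wiring up RBAC for system:kube-scheduler, and they stop once the client-ca informer syncs at 10:51:09. The closing "finished without leader elect" appears to be how the scheduler reports being shut down before its leader-election loop concluded. One way to inspect that lease, assuming the upstream default lease name and namespace:
  kubectl --context functional-269105 -n kube-system get lease kube-scheduler -o yaml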
==> kube-scheduler [e27b4d294d5a2f75c8ca910bc1e0ffa8be05b4fae67fa8834535b131fc0c1873] <==
E1101 10:52:03.387752 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 10:52:03.398994 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 10:52:03.704914 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
E1101 10:52:03.754688 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 10:52:04.016177 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 10:52:06.478373 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1101 10:52:06.637771 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1101 10:52:06.758910 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1101 10:52:06.838932 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1101 10:52:06.933123 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1101 10:52:07.456110 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1101 10:52:07.492739 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1101 10:52:07.715371 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1101 10:52:07.754106 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1101 10:52:07.814994 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1101 10:52:07.822770 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1101 10:52:07.839521 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1101 10:52:07.945770 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1101 10:52:08.077896 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
E1101 10:52:08.236566 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1101 10:52:08.240307 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1101 10:52:08.614919 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1101 10:52:08.676716 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1101 10:52:09.044806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
I1101 10:52:15.998505 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.717892 4609 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.717969 4609 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-bcspl_kubernetes-dashboard(a35d85fa-a948-46ed-9bc5-a3e3dcd9a648): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Nov 01 10:56:07 functional-269105 kubelet[4609]: E1101 10:56:07.718027 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618379 4609 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618452 4609 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618536 4609 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-cjngd_kubernetes-dashboard(ea4196d0-795a-4917-87ce-f61ae24a5972): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Nov 01 10:56:13 functional-269105 kubelet[4609]: E1101 10:56:13.618577 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:56:21 functional-269105 kubelet[4609]: E1101 10:56:21.174666 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:56:26 functional-269105 kubelet[4609]: E1101 10:56:26.175688 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:56:36 functional-269105 kubelet[4609]: E1101 10:56:36.175534 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:56:38 functional-269105 kubelet[4609]: E1101 10:56:38.174893 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:56:49 functional-269105 kubelet[4609]: E1101 10:56:49.174134 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:56:53 functional-269105 kubelet[4609]: E1101 10:56:53.175070 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:57:01 functional-269105 kubelet[4609]: E1101 10:57:01.175110 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:57:04 functional-269105 kubelet[4609]: E1101 10:57:04.177447 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:57:14 functional-269105 kubelet[4609]: E1101 10:57:14.175015 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:57:17 functional-269105 kubelet[4609]: E1101 10:57:17.174678 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:57:26 functional-269105 kubelet[4609]: E1101 10:57:26.174693 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:57:29 functional-269105 kubelet[4609]: E1101 10:57:29.174724 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:57:37 functional-269105 kubelet[4609]: E1101 10:57:37.174448 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:57:44 functional-269105 kubelet[4609]: E1101 10:57:44.175231 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:57:48 functional-269105 kubelet[4609]: E1101 10:57:48.175229 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:57:55 functional-269105 kubelet[4609]: E1101 10:57:55.175741 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
Nov 01 10:58:00 functional-269105 kubelet[4609]: E1101 10:58:00.177728 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bcspl" podUID="a35d85fa-a948-46ed-9bc5-a3e3dcd9a648"
Nov 01 10:58:07 functional-269105 kubelet[4609]: E1101 10:58:07.174443 4609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-cjngd" podUID="ea4196d0-795a-4917-87ce-f61ae24a5972"
==> storage-provisioner [5fe425c496959a4e66b47431be04adee713124560f22706c4d211802311c377d] <==
W1101 10:57:47.410151 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:49.412811 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:49.419175 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:51.422141 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:51.428625 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:53.431498 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:53.435943 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:55.438729 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:55.443286 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:57.446496 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:57.451339 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:59.455155 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:57:59.462431 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:01.465119 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:01.471799 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:03.474906 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:03.479549 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:05.482671 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:05.489213 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:07.492863 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:07.497481 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:09.501559 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:09.508463 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:11.511758 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1101 10:58:11.516933 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
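These warnings are client-go's deprecation handler firing on every leader-election poll: the storage-provisioner still reads v1 Endpoints, which is deprecated as of v1.33 in favour of EndpointSlice, so the log repeats every two seconds while the provisioner keeps working. The replacement API the warning points at can be listed directly:
  kubectl --context functional-269105 -n kube-system get endpointslices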
==> storage-provisioner [91c3f7ce6c557d31d2556cad6567e96e0a92b868430e3e0debaf01906bb9de59] <==
I1101 10:51:59.457289 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1101 10:51:59.460650 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
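This earlier storage-provisioner instance came up while the apiserver was still down: 10.96.0.1:443 is the ClusterIP of the default "kubernetes" Service, so the fatal exit lines up with the same outage window kube-proxy and the scheduler saw above, and the kubelet simply restarted the container (the 5fe425c4... instance above is the healthy successor). To confirm which Service that VIP belongs to:
  kubectl --context functional-269105 get svc kubernetes -o wide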
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-269105 -n functional-269105
helpers_test.go:269: (dbg) Run: kubectl --context functional-269105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl: exit status 1 (87.807233ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-269105/192.168.49.2
Start Time: Sat, 01 Nov 2025 10:53:08 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
mount-munger:
Container ID: containerd://c5ce950fccd378e7cf73fd7f99f12ab3311e639ce1ab582b327db40c7354bcd0
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 01 Nov 2025 10:53:11 +0000
Finished: Sat, 01 Nov 2025 10:53:11 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qpt57 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-qpt57:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m4s default-scheduler Successfully assigned default/busybox-mount to functional-269105
Normal Pulling 5m4s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m2s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.221s (2.221s including waiting). Image size: 1935750 bytes.
Normal Created 5m2s kubelet Created container: mount-munger
Normal Started 5m1s kubelet Started container mount-munger
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-cjngd" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bcspl" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-269105 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl: exit status 1
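The non-zero exit from that describe is a namespace artifact rather than a new failure: without -n, kubectl describe pod searches only the default namespace, so busybox-mount is found while the two dashboard pods (which the kubelet logs place in the kubernetes-dashboard namespace) come back NotFound. Describing them directly would be:
  kubectl --context functional-269105 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-77bf4d6c4c-cjngd kubernetes-dashboard-855c9754f9-bcspl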
--- FAIL: TestFunctional/parallel/DashboardCmd (302.51s)