=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] stderr:
I1202 15:17:28.905682 449241 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:28.905795 449241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.905803 449241 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:28.905808 449241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.906038 449241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:28.906781 449241 mustload.go:66] Loading cluster: functional-031973
I1202 15:17:28.907856 449241 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:28.908295 449241 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:28.929445 449241 host.go:66] Checking if "functional-031973" exists ...
I1202 15:17:28.929844 449241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:17:29.002190 449241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.99032476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:17:29.002355 449241 api_server.go:166] Checking apiserver status ...
I1202 15:17:29.002409 449241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 15:17:29.002452 449241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:29.027799 449241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:29.142743 449241 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4930/cgroup
W1202 15:17:29.152907 449241 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4930/cgroup: Process exited with status 1
stdout:
stderr:
I1202 15:17:29.152966 449241 ssh_runner.go:195] Run: ls
I1202 15:17:29.157784 449241 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1202 15:17:29.163055 449241 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1202 15:17:29.163128 449241 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1202 15:17:29.163338 449241 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:29.163371 449241 addons.go:70] Setting dashboard=true in profile "functional-031973"
I1202 15:17:29.163385 449241 addons.go:239] Setting addon dashboard=true in "functional-031973"
I1202 15:17:29.163432 449241 host.go:66] Checking if "functional-031973" exists ...
I1202 15:17:29.163987 449241 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:29.191822 449241 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1202 15:17:29.193382 449241 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1202 15:17:29.194639 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1202 15:17:29.194676 449241 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1202 15:17:29.194752 449241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:29.216777 449241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:29.326344 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1202 15:17:29.326369 449241 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1202 15:17:29.340554 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1202 15:17:29.340577 449241 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1202 15:17:29.355338 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1202 15:17:29.355372 449241 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1202 15:17:29.372713 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1202 15:17:29.372743 449241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1202 15:17:29.388437 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1202 15:17:29.388466 449241 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1202 15:17:29.402524 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1202 15:17:29.402550 449241 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1202 15:17:29.417744 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1202 15:17:29.417763 449241 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1202 15:17:29.433828 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1202 15:17:29.433857 449241 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1202 15:17:29.450543 449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:17:29.450604 449241 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1202 15:17:29.468021 449241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:17:29.981705 449241 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-031973 addons enable metrics-server
I1202 15:17:29.983245 449241 addons.go:202] Writing out "functional-031973" config to set dashboard=true...
W1202 15:17:29.983547 449241 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1202 15:17:29.984684 449241 kapi.go:59] client config for functional-031973: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.key", CAFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1202 15:17:29.985278 449241 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1202 15:17:29.985297 449241 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1202 15:17:29.985306 449241 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1202 15:17:29.985313 449241 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1202 15:17:29.985321 449241 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1202 15:17:29.992955 449241 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 1e14a1c1-d880-41a8-b1ba-f5ce8d369fac 767 0 2025-12-02 15:17:29 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-02 15:17:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.153.94,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.153.94],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1202 15:17:29.993115 449241 out.go:285] * Launching proxy ...
* Launching proxy ...
I1202 15:17:29.993172 449241 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-031973 proxy --port 36195]
I1202 15:17:29.993448 449241 dashboard.go:159] Waiting for kubectl to output host:port ...
I1202 15:17:30.043657 449241 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1202 15:17:30.043741 449241 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1202 15:17:30.052528 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d3b8a70-9f40-4114-9e46-124c833da3a0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208280 TLS:<nil>}
I1202 15:17:30.052614 449241 retry.go:31] will retry after 73.432µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.056224 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfe4631f-7513-48eb-b214-e329834df876] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208780 TLS:<nil>}
I1202 15:17:30.056279 449241 retry.go:31] will retry after 132.568µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.059849 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[077691fa-1c01-4734-80e1-f3b6f3d29357] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208b40 TLS:<nil>}
I1202 15:17:30.059899 449241 retry.go:31] will retry after 212.877µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.063817 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2eb0c66-904c-4a9c-ae5b-c09bb35dfac9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042da40 TLS:<nil>}
I1202 15:17:30.063872 449241 retry.go:31] will retry after 373.263µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.067193 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3bf6b005-dcf4-417d-808b-a658d498b633] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1202 15:17:30.067234 449241 retry.go:31] will retry after 515.042µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.070587 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35470e3a-eb6e-4559-abc6-814ee2aa5291] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042db80 TLS:<nil>}
I1202 15:17:30.070645 449241 retry.go:31] will retry after 880.824µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.074024 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fc111e3-5ee6-49e2-b603-14367dd553b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1202 15:17:30.074076 449241 retry.go:31] will retry after 1.647039ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.078447 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d35d512-638f-4fa5-96a2-8a915564036f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00067fd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042dcc0 TLS:<nil>}
I1202 15:17:30.078491 449241 retry.go:31] will retry after 1.402606ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.082724 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[977a8e7d-6149-458c-b5a0-892dc17aa9ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036a8c0 TLS:<nil>}
I1202 15:17:30.082774 449241 retry.go:31] will retry after 2.494949ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.088700 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24e871f5-4af5-4352-bccf-5e2f8b0855c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042de00 TLS:<nil>}
I1202 15:17:30.088755 449241 retry.go:31] will retry after 4.036144ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.095228 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5037d85-70ac-415a-bc7e-61f6312e8b5d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1202 15:17:30.095296 449241 retry.go:31] will retry after 6.549807ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.105642 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b15402ed-a348-40a2-b75c-3ca7946409c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036aa00 TLS:<nil>}
I1202 15:17:30.105727 449241 retry.go:31] will retry after 10.104639ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.118947 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[851da338-8e8d-4d95-9b6b-41eaa2ce13e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150000 TLS:<nil>}
I1202 15:17:30.119024 449241 retry.go:31] will retry after 7.003683ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.129751 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd2d07b1-e22e-4b5e-872a-7f287f015a95] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036ab40 TLS:<nil>}
I1202 15:17:30.129846 449241 retry.go:31] will retry after 16.229382ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.150601 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7edb8c63-233b-4ba4-9623-9cd19e75e229] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1202 15:17:30.150698 449241 retry.go:31] will retry after 40.097973ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.194791 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29f7af1a-9606-4934-ae62-8382e622ecf6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036ac80 TLS:<nil>}
I1202 15:17:30.194897 449241 retry.go:31] will retry after 42.297381ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.240884 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3521db4d-1515-46ba-a649-b17b7df5be48] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b03c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036adc0 TLS:<nil>}
I1202 15:17:30.240944 449241 retry.go:31] will retry after 55.235791ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.300512 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a96b9e96-94d3-4525-af97-06c4b908892f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036b180 TLS:<nil>}
I1202 15:17:30.300598 449241 retry.go:31] will retry after 141.233319ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.446136 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e0cd879-9e1a-4d4b-8ff2-38cb9ef3cffc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150140 TLS:<nil>}
I1202 15:17:30.446216 449241 retry.go:31] will retry after 156.215687ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.605753 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4abf687-17a8-47ee-aa7a-94f46764eee0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1202 15:17:30.605829 449241 retry.go:31] will retry after 190.736858ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.800823 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f28896f-d9b2-463b-af54-6f601de2a15a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036b400 TLS:<nil>}
I1202 15:17:30.800894 449241 retry.go:31] will retry after 437.958501ms: Temporary Error: unexpected response code: 503
I1202 15:17:31.242459 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95c47543-9b80-4005-8a92-8f45f94db75c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:31 GMT]] Body:0xc0017b0600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1202 15:17:31.242526 449241 retry.go:31] will retry after 375.150197ms: Temporary Error: unexpected response code: 503
I1202 15:17:31.621145 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddc65722-9648-458a-b8e6-69d41c8fc8d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:31 GMT]] Body:0xc0008a1000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036bb80 TLS:<nil>}
I1202 15:17:31.621227 449241 retry.go:31] will retry after 704.299178ms: Temporary Error: unexpected response code: 503
I1202 15:17:32.329315 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e96001b-c447-493c-bb60-996aa05ae47e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:32 GMT]] Body:0xc00072f900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1202 15:17:32.329380 449241 retry.go:31] will retry after 1.523645226s: Temporary Error: unexpected response code: 503
I1202 15:17:33.857380 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff4461f9-c04b-4cec-9452-abcd6a0a47ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:33 GMT]] Body:0xc0008a1080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150280 TLS:<nil>}
I1202 15:17:33.857454 449241 retry.go:31] will retry after 1.144679699s: Temporary Error: unexpected response code: 503
I1202 15:17:35.006103 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bf1a333-13cb-451e-a605-b0f273790e1b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:35 GMT]] Body:0xc0017b06c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001503c0 TLS:<nil>}
I1202 15:17:35.006176 449241 retry.go:31] will retry after 1.557833298s: Temporary Error: unexpected response code: 503
I1202 15:17:36.568061 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6c6e525-0f8d-4731-bbeb-bf447044a290] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:36 GMT]] Body:0xc0008a1100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036bcc0 TLS:<nil>}
I1202 15:17:36.568145 449241 retry.go:31] will retry after 4.329490129s: Temporary Error: unexpected response code: 503
I1202 15:17:40.901530 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02125efd-a6c8-40d2-b419-de0c4e77b110] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:40 GMT]] Body:0xc0008a1180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150500 TLS:<nil>}
I1202 15:17:40.901609 449241 retry.go:31] will retry after 6.789008513s: Temporary Error: unexpected response code: 503
I1202 15:17:47.697496 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40710c54-b3b3-443f-9156-75b8f23f1cbe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:47 GMT]] Body:0xc0008a1240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150780 TLS:<nil>}
I1202 15:17:47.697560 449241 retry.go:31] will retry after 9.579459528s: Temporary Error: unexpected response code: 503
I1202 15:17:57.282494 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[695f5e70-92c9-465a-96c1-8d01602de36e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:57 GMT]] Body:0xc000882800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036be00 TLS:<nil>}
I1202 15:17:57.282569 449241 retry.go:31] will retry after 8.412852722s: Temporary Error: unexpected response code: 503
I1202 15:18:05.700298 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8f54c04-4069-4f7f-ae3d-f96cf64a8ff1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:18:05 GMT]] Body:0xc0017b07c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1202 15:18:05.700380 449241 retry.go:31] will retry after 23.415372389s: Temporary Error: unexpected response code: 503
I1202 15:18:29.119631 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9540bc8b-8f77-4b22-8073-25ef8d221927] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:18:29 GMT]] Body:0xc000882880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6000 TLS:<nil>}
I1202 15:18:29.119725 449241 retry.go:31] will retry after 34.9920945s: Temporary Error: unexpected response code: 503
I1202 15:19:04.116163 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6643a1e-18cc-466e-9c45-8a8d0c9ef0da] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:19:04 GMT]] Body:0xc00072fc40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001508c0 TLS:<nil>}
I1202 15:19:04.116247 449241 retry.go:31] will retry after 57.738560163s: Temporary Error: unexpected response code: 503
I1202 15:20:01.859443 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba3a0010-d9cf-4802-be3a-1f07dccf92e0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:20:01 GMT]] Body:0xc0008820c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6140 TLS:<nil>}
I1202 15:20:01.859534 449241 retry.go:31] will retry after 1m10.032263161s: Temporary Error: unexpected response code: 503
I1202 15:21:11.895658 449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a07a97d2-8c9a-4508-9b03-02a0f14781c3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:11 GMT]] Body:0xc000882140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150a00 TLS:<nil>}
I1202 15:21:11.895752 449241 retry.go:31] will retry after 1m24.660979926s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-031973
helpers_test.go:243: (dbg) docker inspect functional-031973:
-- stdout --
[
{
"Id": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
"Created": "2025-12-02T15:15:37.382465049Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 437199,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-02T15:15:37.417630105Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
"ResolvConfPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hostname",
"HostsPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hosts",
"LogPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3-json.log",
"Name": "/functional-031973",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-031973:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-031973",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
"LowerDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
"MergedDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/merged",
"UpperDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/diff",
"WorkDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-031973",
"Source": "/var/lib/docker/volumes/functional-031973/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-031973",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-031973",
"name.minikube.sigs.k8s.io": "functional-031973",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "5c60273079da1f9d4e348ddcae81f0a2346ec733b5680c77eb71ba260385fd94",
"SandboxKey": "/var/run/docker/netns/5c60273079da",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33165"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33166"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33169"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33167"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33168"
}
]
},
"Networks": {
"functional-031973": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "072e297832857b662108017b58f1caabb1f529b2dbb839e022eeb4c01cc96da4",
"EndpointID": "60b5bde8cb58337b502aeac0f46839fc2f8c145ed5188498e6f8715b9c69a2f9",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"MacAddress": "92:85:11:0a:bb:d6",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-031973",
"8e6415af0faf"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-031973 -n functional-031973
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-031973 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs -n 25: (1.325334381s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-031973 image rm kicbase/echo-server:functional-031973 --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh findmnt -T /mount1 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh findmnt -T /mount2 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh findmnt -T /mount3 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image save --daemon kicbase/echo-server:functional-031973 --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ mount │ -p functional-031973 --kill=true │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ │
│ cp │ functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ cp │ functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2576747610/001/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ cp │ functional-031973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh -n functional-031973 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ update-context │ functional-031973 update-context --alsologtostderr -v=2 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ update-context │ functional-031973 update-context --alsologtostderr -v=2 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ update-context │ functional-031973 update-context --alsologtostderr -v=2 │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls --format short --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls --format yaml --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ ssh │ functional-031973 ssh pgrep buildkitd │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ │
│ image │ functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls --format json --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
│ image │ functional-031973 image ls --format table --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/02 15:17:28
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1202 15:17:28.857802 449198 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:28.858147 449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.858159 449198 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:28.858167 449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.858525 449198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:28.859129 449198 out.go:368] Setting JSON to false
I1202 15:17:28.860514 449198 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7191,"bootTime":1764681458,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1202 15:17:28.860594 449198 start.go:143] virtualization: kvm guest
I1202 15:17:28.862751 449198 out.go:179] * [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1202 15:17:28.864828 449198 out.go:179] - MINIKUBE_LOCATION=22021
I1202 15:17:28.864854 449198 notify.go:221] Checking for updates...
I1202 15:17:28.867583 449198 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1202 15:17:28.868765 449198 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
I1202 15:17:28.870531 449198 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
I1202 15:17:28.871999 449198 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1202 15:17:28.873798 449198 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1202 15:17:28.875560 449198 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:28.876221 449198 driver.go:422] Setting default libvirt URI to qemu:///system
I1202 15:17:28.903402 449198 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
I1202 15:17:28.903623 449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:17:28.974207 449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.961998728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:17:28.974366 449198 docker.go:319] overlay module found
I1202 15:17:28.977298 449198 out.go:179] * Using the docker driver based on existing profile
I1202 15:17:28.978780 449198 start.go:309] selected driver: docker
I1202 15:17:28.978801 449198 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 15:17:28.978924 449198 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1202 15:17:28.979041 449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:17:29.050244 449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:29.038869576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:17:29.050908 449198 cni.go:84] Creating CNI manager for ""
I1202 15:17:29.051007 449198 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1202 15:17:29.051074 449198 start.go:353] cluster config:
{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 15:17:29.054245 449198 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
ad7eaef8b35d6 56cc512116c8f 4 minutes ago Exited mount-munger 0 46052ac8f3adc busybox-mount default
4ab64fdb4167c d4918ca78576a 5 minutes ago Running nginx 0 9638d534f9cc1 nginx-svc default
ea3aa607d0865 9056ab77afb8e 5 minutes ago Running echo-server 0 38683d7f68a4a hello-node-75c85bcc94-8dm24 default
ae1aa2afadc73 9056ab77afb8e 5 minutes ago Running echo-server 0 7454f7d21eb41 hello-node-connect-7d85dfc575-hncff default
c337850ffec5c 01e8bacf0f500 5 minutes ago Running kube-controller-manager 2 de46fe6f6caf9 kube-controller-manager-functional-031973 kube-system
cadd1246401e2 a5f569d49a979 5 minutes ago Running kube-apiserver 0 2adea7d3feb75 kube-apiserver-functional-031973 kube-system
646c1b88c2291 a3e246e9556e9 5 minutes ago Running etcd 1 03c0f3cc2c9a0 etcd-functional-031973 kube-system
569b0e142e127 8aa150647e88a 5 minutes ago Running kube-proxy 1 bbc9d3c7d3116 kube-proxy-zpxn7 kube-system
06c1524f89b8b 01e8bacf0f500 5 minutes ago Exited kube-controller-manager 1 de46fe6f6caf9 kube-controller-manager-functional-031973 kube-system
c8fa8295f09d0 88320b5498ff2 5 minutes ago Running kube-scheduler 1 f9c0c5bed4df1 kube-scheduler-functional-031973 kube-system
b48e38bef4b45 52546a367cc9e 5 minutes ago Running coredns 1 006f7e1e9e593 coredns-66bc5c9577-b94tb kube-system
19f21b5e8580d 409467f978b4a 5 minutes ago Running kindnet-cni 1 b914ab31d57c5 kindnet-z4gbw kube-system
ef1e68dc2307c 6e38f40d628db 5 minutes ago Running storage-provisioner 1 48df44372fd3e storage-provisioner kube-system
85c6d8bf722dc 6e38f40d628db 6 minutes ago Exited storage-provisioner 0 48df44372fd3e storage-provisioner kube-system
84288ecd238d1 52546a367cc9e 6 minutes ago Exited coredns 0 006f7e1e9e593 coredns-66bc5c9577-b94tb kube-system
8db4008180d89 409467f978b4a 6 minutes ago Exited kindnet-cni 0 b914ab31d57c5 kindnet-z4gbw kube-system
c2d18e41fc203 8aa150647e88a 6 minutes ago Exited kube-proxy 0 bbc9d3c7d3116 kube-proxy-zpxn7 kube-system
3850d6885c4a8 88320b5498ff2 6 minutes ago Exited kube-scheduler 0 f9c0c5bed4df1 kube-scheduler-functional-031973 kube-system
7a94f88d6c9f0 a3e246e9556e9 6 minutes ago Exited etcd 0 03c0f3cc2c9a0 etcd-functional-031973 kube-system
==> containerd <==
Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.548504730Z" level=info msg="container event discarded" container=cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f type=CONTAINER_STARTED_EVENT
Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.982591006Z" level=info msg="container event discarded" container=304e1d2b3e8d28ea0e5ecd99c9224c619a48785c8225a8b961bb0e38fcf94d5b type=CONTAINER_DELETED_EVENT
Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.982645089Z" level=info msg="container event discarded" container=c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66 type=CONTAINER_CREATED_EVENT
Dec 02 15:21:49 functional-031973 containerd[3792]: time="2025-12-02T15:21:49.065170181Z" level=info msg="container event discarded" container=c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66 type=CONTAINER_STARTED_EVENT
Dec 02 15:21:51 functional-031973 containerd[3792]: time="2025-12-02T15:21:51.998743959Z" level=info msg="container event discarded" container=325c273b19ce3626ee1377f7cbb1bb57de4b739c3413425a92dcdd79c186257e type=CONTAINER_STOPPED_EVENT
Dec 02 15:21:52 functional-031973 containerd[3792]: time="2025-12-02T15:21:52.998040292Z" level=info msg="container event discarded" container=abb4b063ffdd986a74f77852bf703a58889b7d9f6a366dd829048fe7a66fc7a9 type=CONTAINER_DELETED_EVENT
Dec 02 15:22:10 functional-031973 containerd[3792]: time="2025-12-02T15:22:10.960374755Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:10 functional-031973 containerd[3792]: time="2025-12-02T15:22:10.960479369Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:14 functional-031973 containerd[3792]: time="2025-12-02T15:22:14.041094242Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_STOPPED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.339796327Z" level=info msg="container event discarded" container=7454f7d21eb4173a2fcc9281a2434b271f88a3f5b16b38bc8296fb6747262ac2 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.339875120Z" level=info msg="container event discarded" container=7454f7d21eb4173a2fcc9281a2434b271f88a3f5b16b38bc8296fb6747262ac2 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.507363218Z" level=info msg="container event discarded" container=38683d7f68a4a44a0cf059979a63a7f26ec26d635d0531803f0d1852528aef05 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.507443403Z" level=info msg="container event discarded" container=38683d7f68a4a44a0cf059979a63a7f26ec26d635d0531803f0d1852528aef05 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.625764091Z" level=info msg="container event discarded" container=9638d534f9cc11c7b3553138db86ae152f51bf793fd8c6690fece8feaf32276e type=CONTAINER_CREATED_EVENT
Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.625837881Z" level=info msg="container event discarded" container=9638d534f9cc11c7b3553138db86ae152f51bf793fd8c6690fece8feaf32276e type=CONTAINER_STARTED_EVENT
Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.185468137Z" level=info msg="container event discarded" container=ae1aa2afadc7333251d88624ce4c05c6201cf7820e7b6f240b0f8f750b5dd3d4 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.230259490Z" level=info msg="container event discarded" container=ae1aa2afadc7333251d88624ce4c05c6201cf7820e7b6f240b0f8f750b5dd3d4 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.795069032Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.844444763Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.867316613Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.922392722Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_STARTED_EVENT
Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448013865Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_CREATED_EVENT
Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448071031Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_STARTED_EVENT
Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902576613Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_CREATED_EVENT
Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902816399Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STARTED_EVENT
==> coredns [84288ecd238d1ae9a22d0f967cce2f858ff120a649bf4bb1ed143ac2e88eae81] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:46439 - 44210 "HINFO IN 7161019882344419339.5475944101483733167. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0705958s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [b48e38bef4b45aff4e9fe11ebb9238a1fa36a1eb7ac89a19b04a3c28f80f0997] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:49535 - 17750 "HINFO IN 3721344718356208668.6506403759620066385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026959352s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
==> describe nodes <==
Name: functional-031973
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-031973
kubernetes.io/os=linux
minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
minikube.k8s.io/name=functional-031973
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_02T15_15_54_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 02 Dec 2025 15:15:50 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-031973
AcquireTime: <unset>
RenewTime: Tue, 02 Dec 2025 15:22:26 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 02 Dec 2025 15:20:55 +0000 Tue, 02 Dec 2025 15:15:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 02 Dec 2025 15:20:55 +0000 Tue, 02 Dec 2025 15:15:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 02 Dec 2025 15:20:55 +0000 Tue, 02 Dec 2025 15:15:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 02 Dec 2025 15:20:55 +0000 Tue, 02 Dec 2025 15:16:10 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-031973
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863360Ki
pods: 110
System Info:
Machine ID: c31a325af81b969158c21fa769271857
System UUID: 120a04eb-b735-4dd2-a8e6-bf3b871cface
Boot ID: 54b7568c-9bf9-47f9-8d68-e36a3a33af00
Kernel Version: 6.8.0-1044-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://2.1.5
Kubelet Version: v1.34.2
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-8dm24 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m14s
default hello-node-connect-7d85dfc575-hncff 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m15s
default mysql-5bb876957f-ljrh9 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 4m51s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m14s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m8s
kube-system coredns-66bc5c9577-b94tb 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m31s
kube-system etcd-functional-031973 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 6m37s
kube-system kindnet-z4gbw 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 6m33s
kube-system kube-apiserver-functional-031973 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-controller-manager-functional-031973 200m (2%) 0 (0%) 0 (0%) 0 (0%) 6m37s
kube-system kube-proxy-zpxn7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m33s
kube-system kube-scheduler-functional-031973 100m (1%) 0 (0%) 0 (0%) 0 (0%) 6m37s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m31s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-wk9xg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-b6pzr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1450m (18%) 800m (10%)
memory 732Mi (2%) 920Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m30s kube-proxy
Normal Starting 5m31s kube-proxy
Normal NodeHasSufficientMemory 6m42s (x8 over 6m42s) kubelet Node functional-031973 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m42s (x8 over 6m42s) kubelet Node functional-031973 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m42s (x7 over 6m42s) kubelet Node functional-031973 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m42s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m37s kubelet Node functional-031973 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 6m37s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 6m37s kubelet Node functional-031973 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m37s kubelet Node functional-031973 status is now: NodeHasSufficientPID
Normal Starting 6m37s kubelet Starting kubelet.
Normal RegisteredNode 6m33s node-controller Node functional-031973 event: Registered Node functional-031973 in Controller
Normal NodeReady 6m20s kubelet Node functional-031973 status is now: NodeReady
Normal Starting 5m43s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m43s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m42s (x8 over 5m43s) kubelet Node functional-031973 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m42s (x8 over 5m43s) kubelet Node functional-031973 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m42s (x7 over 5m43s) kubelet Node functional-031973 status is now: NodeHasSufficientPID
Normal RegisteredNode 5m37s node-controller Node functional-031973 event: Registered Node functional-031973 in Controller
==> dmesg <==
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
[ +13.571564] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 96 e2 dd 40 21 08 06
[ +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
[ +2.699615] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
[Dec 2 14:52] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 3c f9 c8 55 0b 08 06
[ +0.118748] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
[ +0.856727] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
[ +14.974602] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c3 c5 ff a1 a9 08 06
[ +0.000340] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
[ +2.666742] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 5e 20 e4 1d 98 08 06
[ +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
[ +24.223711] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 09 24 19 b9 42 08 06
[ +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
==> etcd [646c1b88c2291cb75cfbfa0d6acbe8c8f6efeb9548850bda8083a0da895f1895] <==
{"level":"warn","ts":"2025-12-02T15:16:49.368918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54748","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.376172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54766","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.384407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54782","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.406190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.412902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.419629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54836","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.426243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54872","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.432784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.439580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.448532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.458893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.466153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.473143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.480092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.486754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.494019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.503161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.511195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.518025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.534149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55090","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.540735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.558898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.565791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.573609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:16:49.627411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
==> etcd [7a94f88d6c9f001c713f28d38be30c8b80117dc154363dfccf439f82d547fabb] <==
{"level":"warn","ts":"2025-12-02T15:15:50.393142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.400004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.407216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50352","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.429152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.436768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:15:50.489896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-02T15:16:45.855107Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-02T15:16:45.855196Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-12-02T15:16:45.855325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-02T15:16:45.857011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-02T15:16:45.858427Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:16:45.858500Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-12-02T15:16:45.858534Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-12-02T15:16:45.858519Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-02T15:16:45.858525Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2025-12-02T15:16:45.858554Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"warn","ts":"2025-12-02T15:16:45.858567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-02T15:16:45.858575Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-02T15:16:45.858583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"error","ts":"2025-12-02T15:16:45.858586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:16:45.860615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-12-02T15:16:45.860709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:16:45.860741Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-12-02T15:16:45.860752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
15:22:30 up 2:04, 0 user, load average: 0.10, 0.52, 0.83
Linux functional-031973 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [19f21b5e8580d8f28a81006ac30e2cb2f04cbd5dcb33e97d6895451934417eeb] <==
I1202 15:20:26.940889 1 main.go:301] handling current node
I1202 15:20:36.944735 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:20:36.944772 1 main.go:301] handling current node
I1202 15:20:46.941319 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:20:46.941361 1 main.go:301] handling current node
I1202 15:20:56.941003 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:20:56.941052 1 main.go:301] handling current node
I1202 15:21:06.941269 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:06.941309 1 main.go:301] handling current node
I1202 15:21:16.941852 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:16.941891 1 main.go:301] handling current node
I1202 15:21:26.941322 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:26.941380 1 main.go:301] handling current node
I1202 15:21:36.940708 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:36.940755 1 main.go:301] handling current node
I1202 15:21:46.946787 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:46.946828 1 main.go:301] handling current node
I1202 15:21:56.941427 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:21:56.941463 1 main.go:301] handling current node
I1202 15:22:06.949323 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:22:06.949362 1 main.go:301] handling current node
I1202 15:22:16.944491 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:22:16.944530 1 main.go:301] handling current node
I1202 15:22:26.940723 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:22:26.940762 1 main.go:301] handling current node
==> kindnet [8db4008180d89c313b691c3ffc28ed67067eecede802fad652ac37fd6fd36acd] <==
I1202 15:15:59.740136 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1202 15:15:59.740462 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1202 15:15:59.740638 1 main.go:148] setting mtu 1500 for CNI
I1202 15:15:59.740656 1 main.go:178] kindnetd IP family: "ipv4"
I1202 15:15:59.740718 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-12-02T15:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1202 15:15:59.942159 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1202 15:15:59.942193 1 controller.go:381] "Waiting for informer caches to sync"
I1202 15:15:59.942205 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1202 15:15:59.942356 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1202 15:16:00.448425 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1202 15:16:00.448465 1 metrics.go:72] Registering metrics
I1202 15:16:00.448562 1 controller.go:711] "Syncing nftables rules"
I1202 15:16:09.943353 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:16:09.943468 1 main.go:301] handling current node
I1202 15:16:19.947849 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:16:19.947892 1 main.go:301] handling current node
I1202 15:16:29.945800 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1202 15:16:29.945842 1 main.go:301] handling current node
==> kube-apiserver [cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f] <==
I1202 15:16:50.088066 1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
I1202 15:16:50.088162 1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
I1202 15:16:50.088208 1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
I1202 15:16:50.092638 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1202 15:16:50.097887 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1202 15:16:50.111508 1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
I1202 15:16:50.123577 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1202 15:16:50.127784 1 controller.go:667] quota admission added evaluator for: endpoints
I1202 15:16:50.926982 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1202 15:16:50.991403 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1202 15:16:51.297655 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1202 15:16:51.303790 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1202 15:16:51.778956 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1202 15:16:51.875708 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1202 15:16:51.933585 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1202 15:16:51.941645 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1202 15:16:53.753031 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1202 15:17:10.530696 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.2.222"}
I1202 15:17:15.965948 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.119.76"}
I1202 15:17:16.154741 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.98.205"}
I1202 15:17:16.202184 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.87.195"}
I1202 15:17:29.825060 1 controller.go:667] quota admission added evaluator for: namespaces
I1202 15:17:29.960687 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.153.94"}
I1202 15:17:29.973650 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.6.186"}
I1202 15:17:39.107718 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.191.136"}
==> kube-controller-manager [06c1524f89b8b1a6e7711d8cea9dec8e489ce09bfdb7e9eeadd318646ca74233] <==
I1202 15:16:37.392510 1 serving.go:386] Generated self-signed cert in-memory
I1202 15:16:38.150126 1 controllermanager.go:191] "Starting" version="v1.34.2"
I1202 15:16:38.150155 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:16:38.151661 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1202 15:16:38.151685 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1202 15:16:38.152002 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1202 15:16:38.152052 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1202 15:16:48.154086 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
==> kube-controller-manager [c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66] <==
I1202 15:16:53.453783 1 shared_informer.go:356] "Caches are synced" controller="node"
I1202 15:16:53.453829 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1202 15:16:53.453856 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1202 15:16:53.453874 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1202 15:16:53.453881 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1202 15:16:53.453920 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1202 15:16:53.454445 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1202 15:16:53.455458 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1202 15:16:53.455556 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1202 15:16:53.458938 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1202 15:16:53.461334 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1202 15:16:53.461352 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1202 15:16:53.461363 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1202 15:16:53.461628 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1202 15:16:53.463921 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I1202 15:16:53.464062 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1202 15:16:53.468396 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I1202 15:16:53.470743 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1202 15:17:29.890175 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.895423 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.901866 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.902541 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.908349 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.912406 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:17:29.913104 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-proxy [569b0e142e12766db902223ca7eb146be3849a69f3c33df418b36923d82a585a] <==
I1202 15:16:36.597068 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1202 15:16:36.598151 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1202 15:16:37.841108 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1202 15:16:40.338703 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1202 15:16:45.857370 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1202 15:16:58.598080 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1202 15:16:58.598118 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1202 15:16:58.598231 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1202 15:16:58.621795 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1202 15:16:58.621848 1 server_linux.go:132] "Using iptables Proxier"
I1202 15:16:58.627324 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1202 15:16:58.627586 1 server.go:527] "Version info" version="v1.34.2"
I1202 15:16:58.627599 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:16:58.628806 1 config.go:200] "Starting service config controller"
I1202 15:16:58.628830 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1202 15:16:58.628871 1 config.go:106] "Starting endpoint slice config controller"
I1202 15:16:58.628889 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1202 15:16:58.628923 1 config.go:403] "Starting serviceCIDR config controller"
I1202 15:16:58.628876 1 config.go:309] "Starting node config controller"
I1202 15:16:58.628957 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1202 15:16:58.628966 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1202 15:16:58.628972 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1202 15:16:58.729789 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1202 15:16:58.729915 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1202 15:16:58.729953 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [c2d18e41fc203eee96d6a09dfee77221ae299daef844af8d7758972f0d5eebd6] <==
I1202 15:15:59.236263 1 server_linux.go:53] "Using iptables proxy"
I1202 15:15:59.310604 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1202 15:15:59.411054 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1202 15:15:59.411094 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1202 15:15:59.411211 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1202 15:15:59.458243 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1202 15:15:59.458294 1 server_linux.go:132] "Using iptables Proxier"
I1202 15:15:59.464059 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1202 15:15:59.464551 1 server.go:527] "Version info" version="v1.34.2"
I1202 15:15:59.464588 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:15:59.466130 1 config.go:403] "Starting serviceCIDR config controller"
I1202 15:15:59.466142 1 config.go:309] "Starting node config controller"
I1202 15:15:59.466157 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1202 15:15:59.466162 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1202 15:15:59.466183 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1202 15:15:59.466219 1 config.go:200] "Starting service config controller"
I1202 15:15:59.466231 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1202 15:15:59.466205 1 config.go:106] "Starting endpoint slice config controller"
I1202 15:15:59.466263 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1202 15:15:59.566412 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1202 15:15:59.566431 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1202 15:15:59.566517 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [3850d6885c4a8427a31f9c1e3c8dfc49dde93cc3abd5127ae5b5e17c87485b87] <==
E1202 15:15:50.912434 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1202 15:15:50.912471 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1202 15:15:50.912528 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1202 15:15:50.912901 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1202 15:15:50.912944 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1202 15:15:51.890107 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1202 15:15:51.920213 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1202 15:15:51.922190 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1202 15:15:51.923075 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1202 15:15:52.010654 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1202 15:15:52.071843 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1202 15:15:52.119186 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1202 15:15:52.132528 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1202 15:15:52.144768 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1202 15:15:52.154877 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1202 15:15:52.187203 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1202 15:15:52.196387 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1202 15:15:52.354774 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1202 15:15:54.408421 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1202 15:16:35.635194 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1202 15:16:35.635272 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1202 15:16:35.635369 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1202 15:16:35.635381 1 server.go:265] "[graceful-termination] secure server is exiting"
I1202 15:16:35.635281 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
E1202 15:16:35.635400 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [c8fa8295f09d01cc139eda620db6d699a0081f04519fd714f09996c687592e9e] <==
E1202 15:16:41.871504 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1202 15:16:41.883003 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1202 15:16:41.978092 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1202 15:16:42.223233 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1202 15:16:42.331606 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1202 15:16:44.153592 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1202 15:16:44.898779 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1202 15:16:45.096198 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1202 15:16:45.140830 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1202 15:16:45.280464 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1202 15:16:45.364422 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1202 15:16:45.426247 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1202 15:16:45.761211 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1202 15:16:45.954414 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1202 15:16:46.119488 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1202 15:16:46.263246 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1202 15:16:46.408248 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1202 15:16:46.563072 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1202 15:16:46.612942 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1202 15:16:47.095235 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1202 15:16:47.397805 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1202 15:16:47.457860 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1202 15:16:47.479575 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1202 15:16:47.828022 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1202 15:17:00.145159 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 02 15:21:07 functional-031973 kubelet[4768]: E1202 15:21:07.924849 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:21:17 functional-031973 kubelet[4768]: E1202 15:21:17.925019 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:21:17 functional-031973 kubelet[4768]: E1202 15:21:17.925019 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
Dec 02 15:21:18 functional-031973 kubelet[4768]: E1202 15:21:18.923334 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:21:22 functional-031973 kubelet[4768]: E1202 15:21:22.924681 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:21:29 functional-031973 kubelet[4768]: E1202 15:21:29.925219 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
Dec 02 15:21:31 functional-031973 kubelet[4768]: E1202 15:21:31.923994 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:21:31 functional-031973 kubelet[4768]: E1202 15:21:31.925318 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:21:36 functional-031973 kubelet[4768]: E1202 15:21:36.924858 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:21:42 functional-031973 kubelet[4768]: E1202 15:21:42.925125 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:21:44 functional-031973 kubelet[4768]: E1202 15:21:44.925000 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
Dec 02 15:21:45 functional-031973 kubelet[4768]: E1202 15:21:45.924363 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:21:51 functional-031973 kubelet[4768]: E1202 15:21:51.924590 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:21:53 functional-031973 kubelet[4768]: E1202 15:21:53.927514 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:21:58 functional-031973 kubelet[4768]: E1202 15:21:58.924761 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
Dec 02 15:22:00 functional-031973 kubelet[4768]: E1202 15:22:00.923573 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:22:03 functional-031973 kubelet[4768]: E1202 15:22:03.924995 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:22:07 functional-031973 kubelet[4768]: E1202 15:22:07.925376 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:22:12 functional-031973 kubelet[4768]: E1202 15:22:12.924459 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:22:13 functional-031973 kubelet[4768]: E1202 15:22:13.925201 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
Dec 02 15:22:14 functional-031973 kubelet[4768]: E1202 15:22:14.924417 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:22:19 functional-031973 kubelet[4768]: E1202 15:22:19.925284 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
Dec 02 15:22:23 functional-031973 kubelet[4768]: E1202 15:22:23.924071 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
Dec 02 15:22:25 functional-031973 kubelet[4768]: E1202 15:22:25.924453 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
Dec 02 15:22:27 functional-031973 kubelet[4768]: E1202 15:22:27.925442 4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
==> storage-provisioner [85c6d8bf722dcb136812c6f14c45b5d380b1de637a1b3615b9d1d2b7fb98940c] <==
W1202 15:16:10.586263 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1202 15:16:10.586468 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1202 15:16:10.586687 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
I1202 15:16:10.586920 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06f31c89-8177-4473-aea9-89a84ed0b889", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7 became leader
W1202 15:16:10.589335 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:10.592545 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1202 15:16:10.687378 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
W1202 15:16:12.596560 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:12.601329 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:14.604965 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:14.614131 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:16.617205 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:16.621029 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:18.624813 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:18.630072 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:20.633409 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:20.637710 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:22.641582 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:22.646626 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:24.650290 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:24.655142 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:26.658929 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:26.664778 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:28.668074 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:16:28.672710 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [ef1e68dc2307c5daf3aa5cdb63ca8b1bb338e7f8dfd850d51a666ac3747a2970] <==
W1202 15:22:05.414158 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:07.417238 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:07.421879 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:09.425884 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:09.430109 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:11.433511 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:11.437616 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:13.441118 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:13.445624 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:15.449074 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:15.454476 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:17.457437 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:17.462411 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:19.465786 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:19.470388 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:21.473986 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:21.479323 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:23.482687 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:23.486733 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:25.490154 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:25.495164 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:27.498617 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:27.502865 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:29.506067 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:22:29.510094 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
helpers_test.go:269: (dbg) Run: kubectl --context functional-031973 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1 (85.33647ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-031973/192.168.49.2
Start Time: Tue, 02 Dec 2025 15:17:29 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
mount-munger:
Container ID: containerd://ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 02 Dec 2025 15:17:32 +0000
Finished: Tue, 02 Dec 2025 15:17:32 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6jsv (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-x6jsv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m2s default-scheduler Successfully assigned default/busybox-mount to functional-031973
Normal Pulling 5m2s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.085s (2.085s including waiting). Image size: 2395207 bytes.
Normal Created 4m59s kubelet Created container: mount-munger
Normal Started 4m59s kubelet Started container mount-munger
Name: mysql-5bb876957f-ljrh9
Namespace: default
Priority: 0
Service Account: default
Node: functional-031973/192.168.49.2
Start Time: Tue, 02 Dec 2025 15:17:39 +0000
Labels: app=mysql
pod-template-hash=5bb876957f
Annotations: <none>
Status: Pending
IP: 10.244.0.11
IPs:
IP: 10.244.0.11
Controlled By: ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP (mysql)
Host Port: 0/TCP (mysql)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gk92d (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-gk92d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m52s default-scheduler Successfully assigned default/mysql-5bb876957f-ljrh9 to functional-031973
Normal Pulling 98s (x5 over 4m52s) kubelet Pulling image "docker.io/mysql:5.7"
Warning Failed 95s (x5 over 4m49s) kubelet Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 95s (x5 over 4m49s) kubelet Error: ErrImagePull
Warning Failed 40s (x15 over 4m48s) kubelet Error: ImagePullBackOff
Normal BackOff 6s (x18 over 4m48s) kubelet Back-off pulling image "docker.io/mysql:5.7"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-031973/192.168.49.2
Start Time: Tue, 02 Dec 2025 15:17:22 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.7
IPs:
IP: 10.244.0.7
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slhv5 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-slhv5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m9s default-scheduler Successfully assigned default/sp-pod to functional-031973
Warning Failed 3m43s (x4 over 5m7s) kubelet Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 2m15s (x5 over 5m9s) kubelet Pulling image "docker.io/nginx"
Warning Failed 2m12s (x5 over 5m7s) kubelet Error: ErrImagePull
Warning Failed 2m12s kubelet Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 73s (x15 over 5m6s) kubelet Error: ImagePullBackOff
Normal BackOff 8s (x20 over 5m6s) kubelet Back-off pulling image "docker.io/nginx"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wk9xg" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-b6pzr" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)