=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
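functional_test.go:933: note: the dashboard URL is only written to stdout after the proxy health check succeeds; as the stderr log below shows, every probe returned 503, so the command was stopped before any URL appeared. A minimal sketch of the kind of wait-for-URL scan a caller performs on the command's stdout (hypothetical helper and names, not minikube's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "strings"
        "time"
    )

    // waitForURL returns the first stdout line that looks like a URL,
    // or an error once the timeout expires (which is what happened here).
    func waitForURL(r io.Reader, timeout time.Duration) (string, error) {
        found := make(chan string, 1)
        go func() {
            sc := bufio.NewScanner(r)
            for sc.Scan() {
                if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "http") {
                    found <- line
                    return
                }
            }
        }()
        select {
        case u := <-found:
            return u, nil
        case <-time.After(timeout):
            return "", fmt.Errorf("no URL within %v", timeout)
        }
    }

    func main() {
        u, err := waitForURL(strings.NewReader("http://127.0.0.1:36195/\n"), time.Second)
        fmt.Println(u, err)
    }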
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] stderr:
I1202 15:21:56.913625 643687 out.go:360] Setting OutFile to fd 1 ...
I1202 15:21:56.913759 643687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:56.913770 643687 out.go:374] Setting ErrFile to fd 2...
I1202 15:21:56.913778 643687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:56.914127 643687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:21:56.914519 643687 mustload.go:66] Loading cluster: functional-169724
I1202 15:21:56.914952 643687 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:21:56.915387 643687 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:21:56.940802 643687 host.go:66] Checking if "functional-169724" exists ...
I1202 15:21:56.941274 643687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:21:57.013817 643687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:57.001790665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:21:57.013989 643687 api_server.go:166] Checking apiserver status ...
I1202 15:21:57.014043 643687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 15:21:57.014096 643687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:21:57.034065 643687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:21:57.144051 643687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10082/cgroup
W1202 15:21:57.154764 643687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10082/cgroup: Process exited with status 1
stdout:
stderr:
I1202 15:21:57.154845 643687 ssh_runner.go:195] Run: ls
I1202 15:21:57.159174 643687 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1202 15:21:57.164476 643687 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
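The apiserver itself is healthy at this point: mustload probes https://192.168.49.2:8441/healthz and gets 200 before enabling the addon. A standalone probe of the same endpoint looks roughly like this (sketch only; minikube authenticates with the cluster CA, while this skips TLS verification for brevity):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get("https://192.168.49.2:8441/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }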
W1202 15:21:57.164535 643687 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1202 15:21:57.164705 643687 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:21:57.164723 643687 addons.go:70] Setting dashboard=true in profile "functional-169724"
I1202 15:21:57.164730 643687 addons.go:239] Setting addon dashboard=true in "functional-169724"
I1202 15:21:57.164755 643687 host.go:66] Checking if "functional-169724" exists ...
I1202 15:21:57.165082 643687 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:21:57.238399 643687 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1202 15:21:57.280084 643687 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1202 15:21:57.299407 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1202 15:21:57.299462 643687 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1202 15:21:57.299550 643687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:21:57.322639 643687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:21:57.437880 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1202 15:21:57.437909 643687 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1202 15:21:57.451798 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1202 15:21:57.451828 643687 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1202 15:21:57.466994 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1202 15:21:57.467028 643687 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1202 15:21:57.481466 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1202 15:21:57.481490 643687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1202 15:21:57.495648 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1202 15:21:57.495677 643687 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1202 15:21:57.509104 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1202 15:21:57.509135 643687 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1202 15:21:57.522545 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1202 15:21:57.522569 643687 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1202 15:21:57.536043 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1202 15:21:57.536066 643687 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1202 15:21:57.549292 643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:21:57.549324 643687 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1202 15:21:57.563076 643687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
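All ten manifests are copied into /etc/kubernetes/addons and applied in a single kubectl invocation over SSH, using the kubectl bundled with the node's Kubernetes version. An equivalent local invocation against the same kubeconfig can be assembled like this (sketch; the paths are the in-node ones from the log and would differ outside the node):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        // Build one `kubectl apply -f ... -f ...` over all manifests.
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", "/etc/kubernetes/addons/"+m)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }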
I1202 15:21:58.080172 643687 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-169724 addons enable metrics-server
I1202 15:21:58.081541 643687 addons.go:202] Writing out "functional-169724" config to set dashboard=true...
W1202 15:21:58.081806 643687 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1202 15:21:58.082510 643687 kapi.go:59] client config for functional-169724: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.key", CAFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1202 15:21:58.082995 643687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1202 15:21:58.083012 643687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1202 15:21:58.083017 643687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1202 15:21:58.083021 643687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1202 15:21:58.083025 643687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1202 15:21:58.092149 643687 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 64c9067d-1c50-47a9-bfb2-e9297465827a 868 0 2025-12-02 15:21:58 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-02 15:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.104.110.134,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.104.110.134],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1202 15:21:58.092348 643687 out.go:285] * Launching proxy ...
* Launching proxy ...
I1202 15:21:58.092410 643687 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-169724 proxy --port 36195]
I1202 15:21:58.092705 643687 dashboard.go:159] Waiting for kubectl to output host:port ...
I1202 15:21:58.146720 643687 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1202 15:21:58.146793 643687 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
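The health check polls the service-proxy URL and retries on 503, with delays that grow from microseconds to minutes as the log below shows. A minimal sketch of that poll loop, assuming jittered exponential backoff (the retry package's exact policy may differ):

    package main

    import (
        "fmt"
        "math/rand"
        "net/http"
        "time"
    )

    // pollUntilOK retries GET until the URL stops returning a non-200
    // status or the deadline passes, doubling a jittered delay each time.
    func pollUntilOK(url string, deadline time.Duration) error {
        delay := 200 * time.Microsecond
        start := time.Now()
        for time.Since(start) < deadline {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        return fmt.Errorf("%s still unhealthy after %v", url, deadline)
    }

    func main() {
        url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
        fmt.Println(pollUntilOK(url, 5*time.Minute))
    }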
I1202 15:21:58.156069 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43088be5-e7d4-445a-ab08-daed4da90f4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081ef40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005eef00 TLS:<nil>}
I1202 15:21:58.156156 643687 retry.go:31] will retry after 142.839µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.159945 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84563250-8083-41fb-aec1-c99b89115388] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0780 TLS:<nil>}
I1202 15:21:58.160000 643687 retry.go:31] will retry after 194.404µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.163490 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3ad115c-1a23-4ca0-ad6b-ea47f9dc4c86] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef040 TLS:<nil>}
I1202 15:21:58.163540 643687 retry.go:31] will retry after 197.614µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.166832 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc2b120b-6cf3-4a19-9bc7-3c695639164d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c08c0 TLS:<nil>}
I1202 15:21:58.166892 643687 retry.go:31] will retry after 250.117µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.170742 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2f624dd-251f-494e-8a2d-db5d7bd6614a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef180 TLS:<nil>}
I1202 15:21:58.170809 643687 retry.go:31] will retry after 611.23µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.174586 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[628b1589-9cc8-4621-9520-c92fef1f8d18] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349180 TLS:<nil>}
I1202 15:21:58.174684 643687 retry.go:31] will retry after 956.894µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.178319 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3072942f-d6c2-4a62-a421-472e200fdd06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090be40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0a00 TLS:<nil>}
I1202 15:21:58.178386 643687 retry.go:31] will retry after 688.898µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.181916 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8faa2677-ac55-4eed-9540-6575d2623121] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef2c0 TLS:<nil>}
I1202 15:21:58.182003 643687 retry.go:31] will retry after 1.320838ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.186748 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec7c767c-a5fd-4d96-913b-fc5d1181e585] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bf40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003492c0 TLS:<nil>}
I1202 15:21:58.186808 643687 retry.go:31] will retry after 1.332749ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.191308 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1497bb58-85f3-49e4-ab37-8fddca921685] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef400 TLS:<nil>}
I1202 15:21:58.191399 643687 retry.go:31] will retry after 2.988593ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.197973 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cae6e7d8-b390-4faf-9edc-6f061ce849bd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0b40 TLS:<nil>}
I1202 15:21:58.198051 643687 retry.go:31] will retry after 8.489819ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.210456 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24c85625-598d-413a-a936-249068a2e461] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0c80 TLS:<nil>}
I1202 15:21:58.210537 643687 retry.go:31] will retry after 6.771642ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.221053 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[487addb1-ed41-460f-a342-b74a4c71e825] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349400 TLS:<nil>}
I1202 15:21:58.221133 643687 retry.go:31] will retry after 16.452845ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.241309 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75d2b107-8078-4f51-80ca-85fdd2b78abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef540 TLS:<nil>}
I1202 15:21:58.241368 643687 retry.go:31] will retry after 19.353478ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.264443 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d55cb24-72bf-4209-8ea0-a3c023cb8698] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0dc0 TLS:<nil>}
I1202 15:21:58.264517 643687 retry.go:31] will retry after 19.498497ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.287373 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0348b572-6ceb-433a-bb24-62201fbb1cc3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef680 TLS:<nil>}
I1202 15:21:58.287441 643687 retry.go:31] will retry after 53.283125ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.344688 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e78d8ec2-757c-47c9-bc17-147d835359d3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0f00 TLS:<nil>}
I1202 15:21:58.344765 643687 retry.go:31] will retry after 84.174374ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.432365 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed220eb1-ace8-4269-949a-5020b5cb8a06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349540 TLS:<nil>}
I1202 15:21:58.432440 643687 retry.go:31] will retry after 72.25084ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.508949 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c052ca4-aa94-4f18-9d92-8b14f2a1e060] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000889000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349680 TLS:<nil>}
I1202 15:21:58.509023 643687 retry.go:31] will retry after 188.65018ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.701349 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc184844-b2ee-4cf2-a05c-803b8fc73dd9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef7c0 TLS:<nil>}
I1202 15:21:58.701411 643687 retry.go:31] will retry after 172.035007ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.877164 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef425056-52d5-4ea9-b4fb-c7a490a125c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1040 TLS:<nil>}
I1202 15:21:58.877314 643687 retry.go:31] will retry after 389.554091ms: Temporary Error: unexpected response code: 503
I1202 15:21:59.270735 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a1b5076-ce3e-4820-bee7-0a6b00d869bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:59 GMT]] Body:0xc000889180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003497c0 TLS:<nil>}
I1202 15:21:59.270819 643687 retry.go:31] will retry after 671.461659ms: Temporary Error: unexpected response code: 503
I1202 15:21:59.945535 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[464969f7-6a46-4821-9749-3e630e306743] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:59 GMT]] Body:0xc00083cd40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efb80 TLS:<nil>}
I1202 15:21:59.945617 643687 retry.go:31] will retry after 860.771622ms: Temporary Error: unexpected response code: 503
I1202 15:22:00.810259 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88420ad2-b589-4a7c-885f-7b33475b2f34] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:00 GMT]] Body:0xc00081f640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349900 TLS:<nil>}
I1202 15:22:00.810339 643687 retry.go:31] will retry after 1.477272918s: Temporary Error: unexpected response code: 503
I1202 15:22:02.290778 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e32c49d8-2057-45ce-b0e2-bac92b27de5d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:02 GMT]] Body:0xc00083cec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349a40 TLS:<nil>}
I1202 15:22:02.290843 643687 retry.go:31] will retry after 1.15700599s: Temporary Error: unexpected response code: 503
I1202 15:22:03.451362 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70cadeb5-5337-40ce-8fe4-045e0b437b98] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:03 GMT]] Body:0xc0008892c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349b80 TLS:<nil>}
I1202 15:22:03.451465 643687 retry.go:31] will retry after 2.292837912s: Temporary Error: unexpected response code: 503
I1202 15:22:05.747860 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5403f731-f5a4-4cfc-be4d-17847a552164] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:05 GMT]] Body:0xc00081f6c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efcc0 TLS:<nil>}
I1202 15:22:05.747923 643687 retry.go:31] will retry after 2.133191737s: Temporary Error: unexpected response code: 503
I1202 15:22:07.884356 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64bdadc9-32af-46e2-aa0d-ad39c921052b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:07 GMT]] Body:0xc00081f740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efe00 TLS:<nil>}
I1202 15:22:07.884416 643687 retry.go:31] will retry after 6.650152813s: Temporary Error: unexpected response code: 503
I1202 15:22:14.539605 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4799373e-67dd-47c4-8e77-7226976ad8bd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:14 GMT]] Body:0xc000889500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc280 TLS:<nil>}
I1202 15:22:14.539690 643687 retry.go:31] will retry after 11.218773495s: Temporary Error: unexpected response code: 503
I1202 15:22:25.763777 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8e926fb-b957-4b82-af40-61a4769d6cc9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:25 GMT]] Body:0xc000889580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1180 TLS:<nil>}
I1202 15:22:25.763854 643687 retry.go:31] will retry after 6.934262968s: Temporary Error: unexpected response code: 503
I1202 15:22:32.701920 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f20e6a4d-6c08-4bc8-a916-bc73d2f458fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:32 GMT]] Body:0xc00083cfc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc3c0 TLS:<nil>}
I1202 15:22:32.702018 643687 retry.go:31] will retry after 22.295270073s: Temporary Error: unexpected response code: 503
I1202 15:22:55.000984 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b3df7fc-3edd-4ea5-93fc-b90fec0e7c99] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:55 GMT]] Body:0xc00081f8c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc500 TLS:<nil>}
I1202 15:22:55.001068 643687 retry.go:31] will retry after 27.795931633s: Temporary Error: unexpected response code: 503
I1202 15:23:22.801191 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0678e711-6fc2-4600-9eb0-55d5d78a338a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:23:22 GMT]] Body:0xc00081f980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c12c0 TLS:<nil>}
I1202 15:23:22.801284 643687 retry.go:31] will retry after 1m1.074600245s: Temporary Error: unexpected response code: 503
I1202 15:24:23.879296 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[719a025b-56de-4ae3-8692-8f29bd145296] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:24:23 GMT]] Body:0xc00081e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc640 TLS:<nil>}
I1202 15:24:23.879390 643687 retry.go:31] will retry after 39.781013294s: Temporary Error: unexpected response code: 503
I1202 15:25:03.664423 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f436ab2f-d5a2-4673-a0a9-618604f48c79] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:25:03 GMT]] Body:0xc00078e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0000 TLS:<nil>}
I1202 15:25:03.664499 643687 retry.go:31] will retry after 1m28.625534719s: Temporary Error: unexpected response code: 503
I1202 15:26:32.294277 643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[664a9203-f8e1-4f3b-a348-47d787d6c77f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:26:32 GMT]] Body:0xc00081e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0140 TLS:<nil>}
I1202 15:26:32.294369 643687 retry.go:31] will retry after 34.082644645s: Temporary Error: unexpected response code: 503
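A persistent 503 from the apiserver's service proxy usually means the kubernetes-dashboard Service has no ready endpoints, i.e. the dashboard pod never became Ready. The 182/188-byte JSON bodies above are most likely the apiserver's "no endpoints available for service" Status object, but the retry loop never logs them; something like `kubectl -n kubernetes-dashboard get pods,endpoints` would confirm, and a one-shot probe that prints the body is enough to see the reason (sketch, run while the proxy is still up):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // Fetch the proxy URL once and print the status line plus the JSON
    // body that the retry loop above discards.
    func main() {
        url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }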
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-169724
helpers_test.go:243: (dbg) docker inspect functional-169724:
-- stdout --
[
{
"Id": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
"Created": "2025-12-02T15:18:38.356956471Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 623776,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-02T15:18:38.392277615Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
"ResolvConfPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hostname",
"HostsPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hosts",
"LogPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e-json.log",
"Name": "/functional-169724",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-169724:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-169724",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
"LowerDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62-init/diff:/var/lib/docker/overlay2/07ec335befb7b26acaacda7ed9253badae67627e1c23bce677fab65b2eb5425a/diff",
"MergedDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/merged",
"UpperDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/diff",
"WorkDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-169724",
"Source": "/var/lib/docker/volumes/functional-169724/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-169724",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-169724",
"name.minikube.sigs.k8s.io": "functional-169724",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "e2ce3823480529bac422fa191445f465825bae9fde6bbb6696f94ab8b9a30fe8",
"SandboxKey": "/var/run/docker/netns/e2ce38234805",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33184"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33185"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33188"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33186"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33187"
}
]
},
"Networks": {
"functional-169724": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "300c2391fbf7a793261c8a44888e7fc898256c3ddc3ed8c9dc4987126019541c",
"EndpointID": "7cb161253f17eaa87e372155b28d897ea3742d100fc2c51b11787e5e14e7c0fa",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"MacAddress": "6a:27:8d:3e:7f:68",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-169724",
"6aca14b45406"
]
}
}
}
}
]
-- /stdout --
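The inspect output confirms the container is running and the port mappings the test depends on (22/tcp published on 33184 for SSH, 8441/tcp on 33187 for the apiserver). minikube reads a single mapping with the --format query shown near the top of the log; the same lookup from the full JSON output can be sketched as:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Read the published host port for 22/tcp from `docker inspect`,
    // equivalent to the --format query used by cli_runner above.
    func main() {
        out, err := exec.Command("docker", "inspect", "functional-169724").Output()
        if err != nil {
            panic(err)
        }
        var containers []struct {
            NetworkSettings struct {
                Ports map[string][]struct{ HostIp, HostPort string }
            }
        }
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        fmt.Println(containers[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // 33184
    }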
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-169724 -n functional-169724
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-169724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 logs -n 25: (1.080085311s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-169724 image ls --format yaml --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls --format short --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ │
│ cp │ functional-169724 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ ssh │ functional-169724 ssh -n functional-169724 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ update-context │ functional-169724 update-context --alsologtostderr -v=2 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ update-context │ functional-169724 update-context --alsologtostderr -v=2 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ update-context │ functional-169724 update-context --alsologtostderr -v=2 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image save kicbase/echo-server:functional-169724 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image rm kicbase/echo-server:functional-169724 --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image save --daemon kicbase/echo-server:functional-169724 --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls --format yaml --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ │
│ image │ functional-169724 image ls --format short --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ ssh │ functional-169724 ssh pgrep buildkitd │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ │
│ image │ functional-169724 image ls --format json --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls --format table --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
│ image │ functional-169724 image ls │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/02 15:21:47
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1202 15:21:47.314586 641068 out.go:360] Setting OutFile to fd 1 ...
I1202 15:21:47.314684 641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:47.314692 641068 out.go:374] Setting ErrFile to fd 2...
I1202 15:21:47.314696 641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:47.314930 641068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:21:47.315423 641068 out.go:368] Setting JSON to false
I1202 15:21:47.316714 641068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7456,"bootTime":1764681451,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1202 15:21:47.316780 641068 start.go:143] virtualization: kvm guest
I1202 15:21:47.318463 641068 out.go:179] * [functional-169724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1202 15:21:47.319518 641068 out.go:179] - MINIKUBE_LOCATION=22021
I1202 15:21:47.319534 641068 notify.go:221] Checking for updates...
I1202 15:21:47.322273 641068 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1202 15:21:47.323401 641068 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
I1202 15:21:47.324434 641068 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
I1202 15:21:47.325717 641068 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1202 15:21:47.327279 641068 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1202 15:21:47.329086 641068 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:21:47.329971 641068 driver.go:422] Setting default libvirt URI to qemu:///system
I1202 15:21:47.357518 641068 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
I1202 15:21:47.357625 641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:21:47.421491 641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.409816405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:21:47.421613 641068 docker.go:319] overlay module found
I1202 15:21:47.423070 641068 out.go:179] * Using the docker driver based on existing profile
I1202 15:21:47.424115 641068 start.go:309] selected driver: docker
I1202 15:21:47.424136 641068 start.go:927] validating driver "docker" against &{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 15:21:47.424267 641068 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1202 15:21:47.424387 641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:21:47.481154 641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.471959518 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:21:47.481877 641068 cni.go:84] Creating CNI manager for ""
I1202 15:21:47.481949 641068 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1202 15:21:47.482010 641068 start.go:353] cluster config:
{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 15:21:47.483575 641068 out.go:179] * dry-run validation complete!
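The trace above is minikube's client-side validation pass over the existing profile; no host state was mutated. A comparable trace can be reproduced by hand with the binary and profile from this run (hypothetical invocation, assuming minikube's dry-run mode):

  out/minikube-linux-amd64 start -p functional-169724 --dry-run --alsologtostderr -v=1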
==> Docker <==
Dec 02 15:21:59 functional-169724 dockerd[7755]: time="2025-12-02T15:21:59.403585379Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:21:59 functional-169724 dockerd[7755]: time="2025-12-02T15:21:59.512715955Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:22:04 functional-169724 dockerd[7755]: time="2025-12-02T15:22:04.189605532Z" level=info msg="ignoring event" container=c5d7a555311c73af0dd6785b30aed59022734dad3efecbe1b29afbb99054221b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 02 15:22:04 functional-169724 dockerd[7755]: time="2025-12-02T15:22:04.357591630Z" level=info msg="ignoring event" container=6557df140b51fb2026fd7881cfc0f05e861ca1d97f10929fea6f284a83907449 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 02 15:22:05 functional-169724 dockerd[7755]: time="2025-12-02T15:22:05.467560101Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=b32f1c40ba5d ep=k8s_POD_sp-pod_default_d8340d35-b3d6-4326-b084-210d089189c8_0 net=none nid=41dbb18abfc2
Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb749a039999dd2401a723e0c1e60d5a92994ce685ff47c06d4aef5217e81059/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
Dec 02 15:22:12 functional-169724 dockerd[7755]: time="2025-12-02T15:22:12.297492774Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.199739095Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.231312464Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:22:17 functional-169724 dockerd[7755]: 2025/12/02 15:22:17 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
Dec 02 15:22:18 functional-169724 dockerd[7755]: time="2025-12-02T15:22:18.824698620Z" level=info msg="sbJoin: gwep4 ''->'4689289d919b', gwep6 ''->''"
Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="error getting RW layer size for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8': Error response from daemon: No such container: 981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8"
Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8'"
Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.200382357Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.292599059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:22:38 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:38Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
Dec 02 15:22:39 functional-169724 dockerd[7755]: time="2025-12-02T15:22:39.272473055Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:23:27 functional-169724 dockerd[7755]: time="2025-12-02T15:23:27.281691411Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.198964490Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.235678685Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.200258483Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.295911943Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Dec 02 15:24:55 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:24:55Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
Dec 02 15:24:57 functional-169724 dockerd[7755]: time="2025-12-02T15:24:57.274599396Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
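Every pull failure in the Docker section above shares one root cause: Docker Hub's unauthenticated pull rate limit, which keeps the dashboard and mysql images from ever being fetched (the nginx pull at 15:22:05 succeeded only because the image was already cached). Two common workarounds, sketched here as assumptions rather than anything this run attempted:

  # Authenticate inside the minikube node so pulls count against an account quota:
  minikube -p functional-169724 ssh
  docker login

  # Or recreate the profile behind a pull-through mirror such as mirror.gcr.io,
  # which caches many popular docker.io images:
  minikube start -p functional-169724 --registry-mirror=https://mirror.gcr.io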
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
20584dd03c3ce nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42 4 minutes ago Running myfrontend 0 cb749a039999d sp-pod default
c092a076c571e kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c 4 minutes ago Running dashboard-metrics-scraper 0 4cc12c5b503a8 dashboard-metrics-scraper-5565989548-zhkml kubernetes-dashboard
c039b5e9295d1 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 ed459e09aa4d8 hello-node-5758569b79-l9dzq default
67e69e5288bb6 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 4f35bc7e21ab4 hello-node-connect-9f67c86d4-j55l8 default
8cf8bb66fc675 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 87dc69f17ae61 busybox-mount default
0cb8e8eb6c98c nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14 5 minutes ago Running nginx 0 37aab5a80f061 nginx-svc default
4c5dab5ebf4f4 aa5e3ebc0dfed 5 minutes ago Running coredns 2 2aee0a7ebd601 coredns-7d764666f9-nd8zq kube-system
ba9422ce22e85 8a4ded35a3eb1 5 minutes ago Running kube-proxy 3 5e0a4a8d35c60 kube-proxy-d9lr9 kube-system
6d5ae1ae06bc8 6e38f40d628db 5 minutes ago Running storage-provisioner 4 4324084172f5b storage-provisioner kube-system
97abb5500abc1 7bb6219ddab95 5 minutes ago Running kube-scheduler 3 acb10ef6ba257 kube-scheduler-functional-169724 kube-system
be942be84ea2e 45f3cc72d235f 5 minutes ago Running kube-controller-manager 3 8575ae5201a0a kube-controller-manager-functional-169724 kube-system
11d06fb246a49 a3e246e9556e9 5 minutes ago Running etcd 2 6f5be81b7f3fb etcd-functional-169724 kube-system
29cc4a6e2e615 aa9d02839d8de 5 minutes ago Running kube-apiserver 0 5fee737043a1b kube-apiserver-functional-169724 kube-system
7edd953743733 8a4ded35a3eb1 5 minutes ago Exited kube-proxy 2 96b84bc5192e0 kube-proxy-d9lr9 kube-system
cdcf8eef29831 45f3cc72d235f 5 minutes ago Exited kube-controller-manager 2 e6f69ca09701e kube-controller-manager-functional-169724 kube-system
71e80cbfbdd27 7bb6219ddab95 5 minutes ago Exited kube-scheduler 2 f8f59ee5d6caf kube-scheduler-functional-169724 kube-system
55b0ea4b79d49 6e38f40d628db 6 minutes ago Created storage-provisioner 3 4adb513a00693 storage-provisioner kube-system
1d0a06c5343ab aa5e3ebc0dfed 6 minutes ago Exited coredns 1 209d9310937be coredns-7d764666f9-nd8zq kube-system
e44038096d500 a3e246e9556e9 6 minutes ago Exited etcd 1 f7e26095932d1 etcd-functional-169724 kube-system
==> coredns [1d0a06c5343a] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.13.1
linux/amd64, go1.25.2, 1db4568
[INFO] 127.0.0.1:41181 - 48782 "HINFO IN 8599916324902243977.3856247845758161228. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021557542s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [4c5dab5ebf4f] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.13.1
linux/amd64, go1.25.2, 1db4568
[INFO] 127.0.0.1:42127 - 12039 "HINFO IN 2776022828749691506.5150390252904704739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046412032s
==> describe nodes <==
Name: functional-169724
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-169724
kubernetes.io/os=linux
minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
minikube.k8s.io/name=functional-169724
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_02T15_19_02_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 02 Dec 2025 15:18:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-169724
AcquireTime: <unset>
RenewTime: Tue, 02 Dec 2025 15:26:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 02 Dec 2025 15:25:58 +0000 Tue, 02 Dec 2025 15:18:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 02 Dec 2025 15:25:58 +0000 Tue, 02 Dec 2025 15:18:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 02 Dec 2025 15:25:58 +0000 Tue, 02 Dec 2025 15:18:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 02 Dec 2025 15:25:58 +0000 Tue, 02 Dec 2025 15:19:06 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-169724
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863356Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863356Ki
pods: 110
System Info:
Machine ID: c31a325af81b969158c21fa769271857
System UUID: 63b0e81a-4f10-411a-8755-281b0479e5a4
Boot ID: bd6d4341-b6ad-469b-96fd-32b547c9d299
Kernel Version: 6.8.0-1044-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://29.0.4
Kubelet Version: v1.35.0-beta.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-5758569b79-l9dzq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m4s
default hello-node-connect-9f67c86d4-j55l8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m11s
default mysql-844cf969f6-dbl2r 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 5m
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m12s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system coredns-7d764666f9-nd8zq 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m51s
kube-system etcd-functional-169724 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m57s
kube-system kube-apiserver-functional-169724 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m35s
kube-system kube-controller-manager-functional-169724 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m57s
kube-system kube-proxy-d9lr9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m51s
kube-system kube-scheduler-functional-169724 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m57s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m51s
kubernetes-dashboard dashboard-metrics-scraper-5565989548-zhkml 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-b84665fb8-z9ghp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (16%) 700m (8%)
memory 682Mi (2%) 870Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal RegisteredNode 7m52s node-controller Node functional-169724 event: Registered Node functional-169724 in Controller
Normal RegisteredNode 6m37s node-controller Node functional-169724 event: Registered Node functional-169724 in Controller
Normal RegisteredNode 5m33s node-controller Node functional-169724 event: Registered Node functional-169724 in Controller
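A quick cross-check of the Allocated resources summary against the pod table above: CPU requests are 600m (mysql) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 1350m, and 1350m of the node's 8 CPUs (8000m) is about 16%. Memory requests are 512Mi + 70Mi + 100Mi = 682Mi, and the only limits set are mysql's 700Mi plus coredns's 170Mi, giving the 870Mi shown.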
==> dmesg <==
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
[Dec 2 15:12] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 33 66 cf 68 fe 08 06
[ +0.000567] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
[ +0.000756] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 fc 13 70 ef 7a 08 06
[Dec 2 15:13] IPv4: martian source 10.244.0.31 from 10.244.0.25, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 6c d3 f0 2d 4b 08 06
[Dec 2 15:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e e9 14 7c 53 c5 08 06
[Dec 2 15:16] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 9e 41 2b 99 a9 08 06
[Dec 2 15:17] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000012] ll header: 00000000: ff ff ff ff ff ff 3e 17 2b 55 09 b0 08 06
[Dec 2 15:18] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a 5d a4 9c b5 12 08 06
[Dec 2 15:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 1e 7c 51 67 ed 08 06
[ +0.136746] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff ce f0 27 e7 63 6f 08 06
[Dec 2 15:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 64 43 d3 ea d3 08 06
[Dec 2 15:21] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e d7 73 53 48 b3 08 06
==> etcd [11d06fb246a4] <==
{"level":"warn","ts":"2025-12-02T15:21:21.911190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.921919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53710","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.929780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53740","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.937421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.944962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.953046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.959409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.965653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.973362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.983357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.989862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:21.996721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.003414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.010147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.017337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.024086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.030647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.037289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.044122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54054","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.056084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.068845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54074","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.075260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.081372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.088112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:21:22.131529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54148","server-name":"","error":"EOF"}
==> etcd [e44038096d50] <==
{"level":"warn","ts":"2025-12-02T15:20:18.102944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57784","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.110240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.130716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.137543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.145240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57868","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.152060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-02T15:20:18.197540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-02T15:21:06.533986Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-02T15:21:06.534077Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-12-02T15:21:06.534253Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-02T15:21:13.535652Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-02T15:21:13.535753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:21:13.535814Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-12-02T15:21:13.535886Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-12-02T15:21:13.535942Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-12-02T15:21:13.535954Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-02T15:21:13.536036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-02T15:21:13.536046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-12-02T15:21:13.536115Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-02T15:21:13.536141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-02T15:21:13.536150Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:21:13.539423Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-12-02T15:21:13.539485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-02T15:21:13.539515Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-12-02T15:21:13.539521Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
15:26:58 up 2:09, 0 user, load average: 0.07, 0.40, 1.25
Linux functional-169724 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [29cc4a6e2e61] <==
I1202 15:21:22.633196 1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
I1202 15:21:22.638620 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:22.638638 1 policy_source.go:248] refreshing policies
I1202 15:21:22.640385 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1202 15:21:23.245488 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1202 15:21:23.245487 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1202 15:21:23.245488 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1202 15:21:23.488408 1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
I1202 15:21:24.334953 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1202 15:21:24.371368 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1202 15:21:24.398800 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1202 15:21:24.404715 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1202 15:21:25.966967 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1202 15:21:26.017714 1 controller.go:667] quota admission added evaluator for: endpoints
I1202 15:21:40.039541 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.181.51"}
I1202 15:21:46.683320 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.106.161"}
I1202 15:21:47.549397 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1202 15:21:47.631670 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.130.65"}
I1202 15:21:55.079754 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.134.79"}
I1202 15:21:57.923307 1 controller.go:667] quota admission added evaluator for: namespaces
I1202 15:21:58.062074 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.110.134"}
I1202 15:21:58.073491 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.118.166"}
I1202 15:21:58.545355 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.51.162"}
E1202 15:22:04.080451 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52156: use of closed network connection
E1202 15:22:11.189951 1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39652: use of closed network connection
==> kube-controller-manager [be942be84ea2] <==
I1202 15:21:25.726929 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728133 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728337 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728514 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728396 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728642 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728743 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728786 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728826 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.729297 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728920 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728944 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.728940 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.730174 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.731415 1 shared_informer.go:370] "Waiting for caches to sync"
I1202 15:21:25.826011 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:25.826041 1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
I1202 15:21:25.826046 1 garbagecollector.go:169] "Proceeding to collect garbage"
I1202 15:21:25.831694 1 shared_informer.go:377] "Caches are synced"
E1202 15:21:57.984423 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:21:57.989421 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:21:57.993493 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:21:57.993617 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:21:57.999312 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1202 15:21:58.004413 1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [cdcf8eef2983] <==
I1202 15:21:19.058440 1 serving.go:386] Generated self-signed cert in-memory
I1202 15:21:19.065566 1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
I1202 15:21:19.065592 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:21:19.067249 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1202 15:21:19.067342 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1202 15:21:19.067442 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1202 15:21:19.067513 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
==> kube-proxy [7edd95374373] <==
I1202 15:21:18.848250 1 server_linux.go:53] "Using iptables proxy"
I1202 15:21:18.932703 1 shared_informer.go:370] "Waiting for caches to sync"
==> kube-proxy [ba9422ce22e8] <==
I1202 15:21:23.842878 1 server_linux.go:53] "Using iptables proxy"
I1202 15:21:23.905118 1 shared_informer.go:370] "Waiting for caches to sync"
I1202 15:21:24.005691 1 shared_informer.go:377] "Caches are synced"
I1202 15:21:24.005731 1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1202 15:21:24.005857 1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1202 15:21:24.028724 1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1202 15:21:24.028780 1 server_linux.go:136] "Using iptables Proxier"
I1202 15:21:24.034388 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1202 15:21:24.034689 1 server.go:529] "Version info" version="v1.35.0-beta.0"
I1202 15:21:24.034705 1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:21:24.035812 1 config.go:309] "Starting node config controller"
I1202 15:21:24.035872 1 config.go:403] "Starting serviceCIDR config controller"
I1202 15:21:24.035884 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1202 15:21:24.035931 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1202 15:21:24.035944 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1202 15:21:24.036053 1 config.go:106] "Starting endpoint slice config controller"
I1202 15:21:24.036100 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1202 15:21:24.036053 1 config.go:200] "Starting service config controller"
I1202 15:21:24.036153 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1202 15:21:24.136338 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1202 15:21:24.136394 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1202 15:21:24.136366 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [71e80cbfbdd2] <==
I1202 15:21:18.927381 1 serving.go:386] Generated self-signed cert in-memory
W1202 15:21:18.930266 1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
W1202 15:21:18.930311 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1202 15:21:18.930323 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1202 15:21:18.940239 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
I1202 15:21:18.940272 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:21:18.942264 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1202 15:21:18.942303 1 shared_informer.go:370] "Waiting for caches to sync"
I1202 15:21:18.942471 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1202 15:21:18.942618 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1202 15:21:19.180778 1 server.go:286] "handlers are not fully synchronized" err="context canceled"
I1202 15:21:19.181243 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1202 15:21:19.181284 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1202 15:21:19.181320 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
E1202 15:21:19.181366 1 shared_informer.go:373] "Unable to sync caches" logger="UnhandledError"
I1202 15:21:19.181379 1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1202 15:21:19.181425 1 server.go:265] "[graceful-termination] secure server is exiting"
E1202 15:21:19.181693 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [97abb5500abc] <==
I1202 15:21:22.037813 1 serving.go:386] Generated self-signed cert in-memory
W1202 15:21:22.524983 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1202 15:21:22.525288 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1202 15:21:22.525444 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1202 15:21:22.525565 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1202 15:21:22.546981 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
I1202 15:21:22.547008 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 15:21:22.549051 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1202 15:21:22.549078 1 shared_informer.go:370] "Waiting for caches to sync"
I1202 15:21:22.549233 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1202 15:21:22.549266 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1202 15:21:22.650016 1 shared_informer.go:377] "Caches are synced"
==> kubelet <==
Dec 02 15:25:42 functional-169724 kubelet[9712]: E1202 15:25:42.180701 9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-169724" containerName="etcd"
Dec 02 15:25:44 functional-169724 kubelet[9712]: E1202 15:25:44.182969 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:25:47 functional-169724 kubelet[9712]: E1202 15:25:47.180056 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:25:47 functional-169724 kubelet[9712]: E1202 15:25:47.182477 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
Dec 02 15:25:55 functional-169724 kubelet[9712]: E1202 15:25:55.182641 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.179662 9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-169724" containerName="kube-scheduler"
Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.179788 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.182389 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
Dec 02 15:26:07 functional-169724 kubelet[9712]: E1202 15:26:07.179872 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zhkml" containerName="dashboard-metrics-scraper"
Dec 02 15:26:09 functional-169724 kubelet[9712]: E1202 15:26:09.184047 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:11 functional-169724 kubelet[9712]: E1202 15:26:11.180477 9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-169724" containerName="kube-apiserver"
Dec 02 15:26:17 functional-169724 kubelet[9712]: E1202 15:26:17.180264 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:26:17 functional-169724 kubelet[9712]: E1202 15:26:17.183376 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
Dec 02 15:26:19 functional-169724 kubelet[9712]: E1202 15:26:19.179711 9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-169724" containerName="kube-controller-manager"
Dec 02 15:26:20 functional-169724 kubelet[9712]: E1202 15:26:20.185742 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:32 functional-169724 kubelet[9712]: E1202 15:26:32.179772 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:26:32 functional-169724 kubelet[9712]: E1202 15:26:32.182145 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
Dec 02 15:26:33 functional-169724 kubelet[9712]: E1202 15:26:33.182378 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:41 functional-169724 kubelet[9712]: E1202 15:26:41.180025 9712 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nd8zq" containerName="coredns"
Dec 02 15:26:44 functional-169724 kubelet[9712]: E1202 15:26:44.193922 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:47 functional-169724 kubelet[9712]: E1202 15:26:47.179560 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:26:47 functional-169724 kubelet[9712]: E1202 15:26:47.182080 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
Dec 02 15:26:57 functional-169724 kubelet[9712]: E1202 15:26:57.182605 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
Dec 02 15:26:58 functional-169724 kubelet[9712]: E1202 15:26:58.180086 9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
Dec 02 15:26:58 functional-169724 kubelet[9712]: E1202 15:26:58.182489 9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
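Every kubelet error above has the same root cause: Docker Hub's unauthenticated pull rate limit (the "toomanyrequests" response), hitting both docker.io/mysql:5.7 and the dashboard image. A hedged workaround, assuming the docker driver and a host that can still pull (for example after an authenticated "docker login"), is to pull on the host and side-load the image so the kubelet never touches the registry:

    # Sketch only; profile name and image taken from the errors above.
    docker pull docker.io/mysql:5.7
    minikube -p functional-169724 image load docker.io/mysql:5.7
    # Verify the node now has the image, so no pull (and no rate limit) is involved:
    minikube -p functional-169724 image ls | grep mysql

Note the dashboard image is pinned by digest; "image load" carries the tag but not necessarily the repo digest, so for that pod an authenticated pull from inside the node or starting the cluster with a registry mirror ("minikube start --registry-mirror=<mirror-url>", URL hypothetical) is the safer route.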
==> storage-provisioner [55b0ea4b79d4] <==
==> storage-provisioner [6d5ae1ae06bc] <==
W1202 15:26:32.404371 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:34.407799 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:34.411509 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:36.414750 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:36.419288 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:38.422796 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:38.428146 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:40.431203 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:40.435145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:42.438283 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:42.443630 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:44.447172 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:44.451264 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:46.454750 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:46.461705 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:48.464838 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:48.468625 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:50.471998 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:50.475957 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:52.479215 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:52.484274 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:54.487584 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:54.491806 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:56.495391 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1202 15:26:56.499456 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
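The storage-provisioner block is one API-server deprecation warning repeated on a roughly two-second cadence, which points at its Endpoints-based leader-election renewals rather than anything related to this test; the warnings are noise for this failure. To see the deprecated object and its replacement side by side (context name from this log):

    kubectl --context functional-169724 get endpoints -A
    kubectl --context functional-169724 get endpointslices.discovery.k8s.io -A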
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169724 -n functional-169724
helpers_test.go:269: (dbg) Run: kubectl --context functional-169724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1 (73.469356ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-169724/192.168.49.2
Start Time: Tue, 02 Dec 2025 15:21:46 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Containers:
mount-munger:
Container ID: docker://8cf8bb66fc675c3aadf1d2407e671757956904149a59af8e6fbd98ead10f012c
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 02 Dec 2025 15:21:50 +0000
Finished: Tue, 02 Dec 2025 15:21:50 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cbd6 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-5cbd6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m12s default-scheduler Successfully assigned default/busybox-mount to functional-169724
Normal Pulling 5m11s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m8s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.375s (3.098s including waiting). Image size: 4403845 bytes.
Normal Created 5m8s kubelet Container created
Normal Started 5m8s kubelet Container started
Name: mysql-844cf969f6-dbl2r
Namespace: default
Priority: 0
Service Account: default
Node: functional-169724/192.168.49.2
Start Time: Tue, 02 Dec 2025 15:21:58 +0000
Labels: app=mysql
pod-template-hash=844cf969f6
Annotations: <none>
Status: Pending
IP: 10.244.0.16
IPs:
IP: 10.244.0.16
Controlled By: ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP (mysql)
Host Port: 0/TCP (mysql)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2klr6 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-2klr6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned default/mysql-844cf969f6-dbl2r to functional-169724
Normal Pulling 2m1s (x5 over 4m59s) kubelet Pulling image "docker.io/mysql:5.7"
Warning Failed 2m1s (x5 over 4m59s) kubelet Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 2m1s (x5 over 4m59s) kubelet Error: ErrImagePull
Warning Failed 74s (x15 over 4m59s) kubelet Error: ImagePullBackOff
Normal BackOff 14s (x20 over 4m59s) kubelet Back-off pulling image "docker.io/mysql:5.7"
-- /stdout --
** stderr **
Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-z9ghp" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1
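Two quirks in this post-mortem are worth flagging. First, busybox-mount shows up in the "non-running" list only because the harness selects status.phase!=Running, which also matches its Succeeded phase; the pod completed cleanly (exit code 0 above). Second, the stderr NotFound does not mean the dashboard pod vanished: describe ran without -n, so kubectl looked in the default namespace while the pod lives in kubernetes-dashboard. Hedged follow-up queries, with names taken from this log:

    # Exclude completed pods when triaging:
    kubectl --context functional-169724 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded
    # Describe the dashboard pod in its own namespace:
    kubectl --context functional-169724 -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-z9ghp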
E1202 15:27:55.507071 567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:28:23.210141 567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:31:48.809748 567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
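These three cert_rotation errors reference client certs under profiles (functional-049660, addons-029941) that no longer exist on disk — most likely stale kubeconfig contexts left behind by earlier, already-deleted test clusters, and unrelated to this failure. A hedged cleanup sketch:

    # List contexts still pointing at deleted minikube profiles, then drop them:
    kubectl config get-contexts
    kubectl config delete-context functional-049660
    kubectl config delete-context addons-029941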
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.07s)
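Reading the log back as a whole: the dashboard image pull was blocked by the rate limit, the dashboard pod never left ImagePullBackOff, and the DashboardCmd test failed at the 302s mark. A hedged manual reproduction once pulls succeed (binary path and profile name from this log; run the two commands in separate terminals):

    # Watch the dashboard pod come up:
    kubectl --context functional-169724 -n kubernetes-dashboard get pods -w
    # Ask minikube for the dashboard URL:
    out/minikube-linux-amd64 dashboard --url -p functional-169724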