=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] stderr:
I0929 11:20:10.865322 411898 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:10.865597 411898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:10.865607 411898 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:10.865612 411898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:10.865811 411898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:10.866138 411898 mustload.go:65] Loading cluster: functional-113333
I0929 11:20:10.866538 411898 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:10.866997 411898 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:10.886995 411898 host.go:66] Checking if "functional-113333" exists ...
I0929 11:20:10.887318 411898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 11:20:10.948699 411898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:20:10.936262235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 11:20:10.948816 411898 api_server.go:166] Checking apiserver status ...
I0929 11:20:10.948856 411898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 11:20:10.948923 411898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:10.971777 411898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:11.079131 411898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9442/cgroup
W0929 11:20:11.091378 411898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9442/cgroup: Process exited with status 1
stdout:
stderr:
I0929 11:20:11.091464 411898 ssh_runner.go:195] Run: ls
I0929 11:20:11.095491 411898 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 11:20:11.099856 411898 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 11:20:11.099911 411898 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 11:20:11.100058 411898 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:11.100073 411898 addons.go:69] Setting dashboard=true in profile "functional-113333"
I0929 11:20:11.100079 411898 addons.go:238] Setting addon dashboard=true in "functional-113333"
I0929 11:20:11.100107 411898 host.go:66] Checking if "functional-113333" exists ...
I0929 11:20:11.100403 411898 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:11.121079 411898 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 11:20:11.122453 411898 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 11:20:11.124376 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 11:20:11.124399 411898 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 11:20:11.124469 411898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:11.141727 411898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:11.251510 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 11:20:11.251538 411898 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 11:20:11.272714 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 11:20:11.272736 411898 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 11:20:11.291548 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 11:20:11.291572 411898 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 11:20:11.312515 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 11:20:11.312540 411898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 11:20:11.335893 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 11:20:11.335924 411898 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 11:20:11.355911 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 11:20:11.355938 411898 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 11:20:11.375628 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 11:20:11.375659 411898 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 11:20:11.395416 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 11:20:11.395439 411898 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 11:20:11.414477 411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:20:11.414502 411898 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 11:20:11.432605 411898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:20:11.883051 411898 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-113333 addons enable metrics-server
I0929 11:20:11.884043 411898 addons.go:201] Writing out "functional-113333" config to set dashboard=true...
W0929 11:20:11.884315 411898 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 11:20:11.885218 411898 kapi.go:59] client config for functional-113333: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.key", CAFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 11:20:11.885821 411898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 11:20:11.885843 411898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 11:20:11.885851 411898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 11:20:11.885861 411898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 11:20:11.885867 411898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 11:20:11.894655 411898 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 63763807-fce1-4133-8023-0bc523388a1a 877 0 2025-09-29 11:20:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 11:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.241.54,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.241.54],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 11:20:11.894835 411898 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 11:20:11.894930 411898 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-113333 proxy --port 36195]
I0929 11:20:11.895221 411898 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 11:20:11.949189 411898 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 11:20:11.949271 411898 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 11:20:11.959677 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d5a9c3b-02b2-46c3-a734-ef495c3a21ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I0929 11:20:11.959793 411898 retry.go:31] will retry after 105.29µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.964293 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c81b15c-3cb2-4a27-9fa4-3f6d8a854643] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bc840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I0929 11:20:11.964382 411898 retry.go:31] will retry after 80.446µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.970525 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a70ea1cd-7fea-43f4-b7a4-d5682b48ede6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfcc0 TLS:<nil>}
I0929 11:20:11.970618 411898 retry.go:31] will retry after 163.915µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.975065 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[979c94dd-cf15-40b7-a0cb-afc1176ef9fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I0929 11:20:11.975133 411898 retry.go:31] will retry after 262.248µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.979398 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b6463c17-316c-471e-9587-65f250b5a5fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bc980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I0929 11:20:11.979535 411898 retry.go:31] will retry after 673.432µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.986589 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[417da0f1-1431-48e9-afd8-443566220a76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a000 TLS:<nil>}
I0929 11:20:11.986718 411898 retry.go:31] will retry after 518.032µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.990827 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d030e557-6a5d-4701-8829-6e11a782084b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bca80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698000 TLS:<nil>}
I0929 11:20:11.990914 411898 retry.go:31] will retry after 1.115122ms: Temporary Error: unexpected response code: 503
I0929 11:20:11.994581 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8913a13a-080e-46ff-8452-ff23bc48df09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bcb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698140 TLS:<nil>}
I0929 11:20:11.994686 411898 retry.go:31] will retry after 2.500646ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.000576 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b99ae79a-1e57-4f2b-b759-102ef2f22733] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d4d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a140 TLS:<nil>}
I0929 11:20:12.000638 411898 retry.go:31] will retry after 2.355685ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.006280 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a926b466-a496-483d-884b-f5cc49e4f184] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcc00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698280 TLS:<nil>}
I0929 11:20:12.006340 411898 retry.go:31] will retry after 3.030698ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.011899 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[937e0bb0-387a-469b-a8e9-53e1f8e4c03b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d4fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a280 TLS:<nil>}
I0929 11:20:12.011959 411898 retry.go:31] will retry after 5.967932ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.021145 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cca3875d-5067-4a77-b787-92756a163bee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d57c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016983c0 TLS:<nil>}
I0929 11:20:12.021210 411898 retry.go:31] will retry after 4.693221ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.028937 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[222e8599-df90-4d7c-899c-bb617379c70b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d58c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698500 TLS:<nil>}
I0929 11:20:12.028996 411898 retry.go:31] will retry after 7.945867ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.040000 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03b7a232-c811-48f9-8814-538f251cddd7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698640 TLS:<nil>}
I0929 11:20:12.040068 411898 retry.go:31] will retry after 23.567073ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.067330 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ac47959-9bb3-4b96-b701-bb61c0ff7a6f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a3c0 TLS:<nil>}
I0929 11:20:12.067401 411898 retry.go:31] will retry after 16.545954ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.087373 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5b555d6-d9e9-4aad-b5c6-38ca1c256940] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bce00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707540 TLS:<nil>}
I0929 11:20:12.087439 411898 retry.go:31] will retry after 61.273935ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.152673 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b569584d-2ee9-48fb-8b69-eafb009198dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a500 TLS:<nil>}
I0929 11:20:12.152749 411898 retry.go:31] will retry after 68.202803ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.225222 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d74e93b4-9a99-4493-bdb2-9b7b0a0d9dfc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707680 TLS:<nil>}
I0929 11:20:12.225296 411898 retry.go:31] will retry after 82.210728ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.311414 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2b93faf-11bd-495d-9529-7cb40479f34f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007077c0 TLS:<nil>}
I0929 11:20:12.311494 411898 retry.go:31] will retry after 147.243651ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.462763 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ffc7f095-3b73-4f8f-b3ec-5e44a3e4434c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a640 TLS:<nil>}
I0929 11:20:12.462861 411898 retry.go:31] will retry after 162.588755ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.628641 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b921a98-7496-4b23-873a-aae50a5e9353] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d5c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707900 TLS:<nil>}
I0929 11:20:12.628702 411898 retry.go:31] will retry after 242.946834ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.874969 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43de276e-0b91-4020-8ae6-69032db4bf7d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcfc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698780 TLS:<nil>}
I0929 11:20:12.875027 411898 retry.go:31] will retry after 495.346739ms: Temporary Error: unexpected response code: 503
I0929 11:20:13.373551 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad4c6322-c804-486d-8b5a-474eb19398bc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:13 GMT]] Body:0xc0015fd8c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a780 TLS:<nil>}
I0929 11:20:13.373619 411898 retry.go:31] will retry after 1.047679097s: Temporary Error: unexpected response code: 503
I0929 11:20:14.425150 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[56e8593a-ffee-411f-9e83-9c1ca2b68d2e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:14 GMT]] Body:0xc0014bd0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707a40 TLS:<nil>}
I0929 11:20:14.425225 411898 retry.go:31] will retry after 1.275988625s: Temporary Error: unexpected response code: 503
I0929 11:20:15.704702 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f86bbcc1-034c-4e80-97d1-b8398af855db] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:15 GMT]] Body:0xc0008685c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a8c0 TLS:<nil>}
I0929 11:20:15.704770 411898 retry.go:31] will retry after 1.44204104s: Temporary Error: unexpected response code: 503
I0929 11:20:17.149899 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5f4c7ad-dff2-4d2b-b641-33c0b39e1324] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:17 GMT]] Body:0xc000868740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa3c0 TLS:<nil>}
I0929 11:20:17.149965 411898 retry.go:31] will retry after 3.389070842s: Temporary Error: unexpected response code: 503
I0929 11:20:20.545016 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf21411c-2a0b-41b1-9d1d-de79c63ae8f1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:20 GMT]] Body:0xc0007d5dc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa500 TLS:<nil>}
I0929 11:20:20.545112 411898 retry.go:31] will retry after 4.613906702s: Temporary Error: unexpected response code: 503
I0929 11:20:25.164847 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c34844d-f01b-4c66-a591-13286faeba86] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:25 GMT]] Body:0xc000868940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa640 TLS:<nil>}
I0929 11:20:25.164941 411898 retry.go:31] will retry after 7.574140968s: Temporary Error: unexpected response code: 503
I0929 11:20:32.742654 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c98433e6-656c-4a1d-8cdb-e20ecae0b812] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:32 GMT]] Body:0xc0014bd1c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016988c0 TLS:<nil>}
I0929 11:20:32.742718 411898 retry.go:31] will retry after 5.540934918s: Temporary Error: unexpected response code: 503
I0929 11:20:38.287802 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05992f34-d3dd-47b3-a301-a176a8bfc90c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:38 GMT]] Body:0xc000868a40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698a00 TLS:<nil>}
I0929 11:20:38.287917 411898 retry.go:31] will retry after 16.896410782s: Temporary Error: unexpected response code: 503
I0929 11:20:55.188361 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d6d620a-3b40-4f5d-881c-8ed715c20187] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:55 GMT]] Body:0xc000868c40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170aa00 TLS:<nil>}
I0929 11:20:55.188434 411898 retry.go:31] will retry after 10.347207584s: Temporary Error: unexpected response code: 503
I0929 11:21:05.542204 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b595154a-318d-4c9a-a281-5a750c413e34] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:21:05 GMT]] Body:0xc0014bd2c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa780 TLS:<nil>}
I0929 11:21:05.542276 411898 retry.go:31] will retry after 38.613353795s: Temporary Error: unexpected response code: 503
I0929 11:21:44.160767 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c776ccdd-527a-4fb3-8f78-25cc1b4ea042] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:21:44 GMT]] Body:0xc000868e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aac80 TLS:<nil>}
I0929 11:21:44.160855 411898 retry.go:31] will retry after 1m1.828281956s: Temporary Error: unexpected response code: 503
I0929 11:22:45.992898 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[619da1b5-1db4-43f5-8932-26e8e15ce582] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:22:45 GMT]] Body:0xc0015fc0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I0929 11:22:45.992974 411898 retry.go:31] will retry after 50.195696598s: Temporary Error: unexpected response code: 503
I0929 11:23:36.192435 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c76df89-7ed2-4b83-88f5-651db017321c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:23:36 GMT]] Body:0xc0014bc040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I0929 11:23:36.192533 411898 retry.go:31] will retry after 55.964495296s: Temporary Error: unexpected response code: 503
I0929 11:24:32.160540 411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3706c8b2-ed48-42a7-8bb2-ff7c07b61c65] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:24:32 GMT]] Body:0xc0015fc0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I0929 11:24:32.160659 411898 retry.go:31] will retry after 45.381762389s: Temporary Error: unexpected response code: 503
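Every probe in the loop above got a 503 from the kubernetes-dashboard Service even though kubectl proxy itself answered on 127.0.0.1:36195, which points at the dashboard pod never becoming ready rather than at the tunnel. A minimal way to confirm that on a live cluster, assuming the profile, namespace, and deployment names match what the addon created in this run, would be:

    # list the dashboard pods and where they are scheduled
    kubectl --context functional-113333 -n kubernetes-dashboard get pods -o wide
    # show rollout status, events, and any image-pull errors on the deployment
    kubectl --context functional-113333 -n kubernetes-dashboard describe deployment kubernetes-dashboard
    # a Service with no ready endpoints is exactly what produces the 503s seen above
    kubectl --context functional-113333 -n kubernetes-dashboard get endpoints kubernetes-dashboard
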
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-113333
helpers_test.go:243: (dbg) docker inspect functional-113333:
-- stdout --
[
{
"Id": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
"Created": "2025-09-29T11:17:04.817558805Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 391650,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-09-29T11:17:04.849941498Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
"ResolvConfPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hostname",
"HostsPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hosts",
"LogPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8-json.log",
"Name": "/functional-113333",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-113333:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-113333",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
"LowerDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
"MergedDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/merged",
"UpperDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/diff",
"WorkDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-113333",
"Source": "/var/lib/docker/volumes/functional-113333/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-113333",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-113333",
"name.minikube.sigs.k8s.io": "functional-113333",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a211ba94c8850961796fb0b95cdec4d53ee08039011b058eabdfa970d2029d85",
"SandboxKey": "/var/run/docker/netns/a211ba94c885",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33148"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33149"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33152"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33150"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33151"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-113333": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "b6:42:67:f3:c0:76",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "90b72701a62f4e5c7a3409fa4bb2ab5e9e99c71d1e536f1b56e4a3c618dc646d",
"EndpointID": "049ef9c51ec99d3d8642aca3df3c234d511cfe97279244292d3363d54e2d7fca",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-113333",
"0e969f65a5f5"
]
}
}
}
}
]
-- /stdout --
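The inspect output above shows the node container itself is healthy: State.Running is true and each guest port, including the apiserver on 8441/tcp, is published on 127.0.0.1 (8441/tcp -> 33151), so host-to-container connectivity is not the failing piece. Assuming one wanted to pull a single mapping out of that blob instead of reading the full JSON, the same Go template the test already uses for 22/tcp works for any port key:

    # one-off check; mirrors the --format string used earlier in this log, with the port key swapped
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-113333
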
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-113333 -n functional-113333
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-113333 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-113333 ssh stat /mount-9p/created-by-pod │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh sudo umount -f /mount-9p │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ mount │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ ssh │ functional-113333 ssh findmnt -T /mount-9p | grep 9p │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ ssh │ functional-113333 ssh findmnt -T /mount-9p | grep 9p │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh -- ls -la /mount-9p │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh sudo umount -f /mount-9p │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ mount │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ mount │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ ssh │ functional-113333 ssh findmnt -T /mount1 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ mount │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ ssh │ functional-113333 ssh findmnt -T /mount1 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh findmnt -T /mount2 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh findmnt -T /mount3 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ mount │ -p functional-113333 --kill=true │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ update-context │ functional-113333 update-context --alsologtostderr -v=2 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ update-context │ functional-113333 update-context --alsologtostderr -v=2 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ update-context │ functional-113333 update-context --alsologtostderr -v=2 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ image │ functional-113333 image ls --format short --alsologtostderr │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ image │ functional-113333 image ls --format yaml --alsologtostderr │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ ssh │ functional-113333 ssh pgrep buildkitd │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ │
│ image │ functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ image │ functional-113333 image ls │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ image │ functional-113333 image ls --format json --alsologtostderr │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
│ image │ functional-113333 image ls --format table --alsologtostderr │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/29 11:20:04
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0929 11:20:04.491921 409081 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:04.492007 409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:04.492014 409081 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:04.492018 409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:04.492320 409081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:04.492755 409081 out.go:368] Setting JSON to false
I0929 11:20:04.493767 409081 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0929 11:20:04.493856 409081 start.go:140] virtualization: kvm guest
I0929 11:20:04.495673 409081 out.go:179] * [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I0929 11:20:04.496907 409081 notify.go:220] Checking for updates...
I0929 11:20:04.496966 409081 out.go:179] - MINIKUBE_LOCATION=21655
I0929 11:20:04.498242 409081 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0929 11:20:04.499707 409081 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
I0929 11:20:04.501035 409081 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
I0929 11:20:04.505457 409081 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I0929 11:20:04.506863 409081 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0929 11:20:04.509025 409081 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:04.509717 409081 driver.go:421] Setting default libvirt URI to qemu:///system
I0929 11:20:04.536233 409081 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
I0929 11:20:04.536391 409081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 11:20:04.596439 409081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-29 11:20:04.586118728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 11:20:04.596617 409081 docker.go:318] overlay module found
I0929 11:20:04.598520 409081 out.go:179] * Using the docker driver based on existing profile
I0929 11:20:04.599774 409081 start.go:304] selected driver: docker
I0929 11:20:04.599789 409081 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0929 11:20:04.599895 409081 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0929 11:20:04.603063 409081 out.go:203]
W0929 11:20:04.604206 409081 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
I0929 11:20:04.605379 409081 out.go:203]
==> Docker <==
Sep 29 11:20:15 functional-113333 dockerd[6858]: time="2025-09-29T11:20:15.328706044Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:17 functional-113333 dockerd[6858]: 2025/09/29 11:20:17 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
Sep 29 11:20:19 functional-113333 dockerd[6858]: time="2025-09-29T11:20:19.343821858Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.248275811Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.279664315Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.297560529Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.327949749Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:38 functional-113333 dockerd[6858]: time="2025-09-29T11:20:38.320400297Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:46 functional-113333 dockerd[6858]: time="2025-09-29T11:20:46.319883738Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.245713777Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.272860343Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.247335940Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.277658815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:21:25 functional-113333 dockerd[6858]: time="2025-09-29T11:21:25.325203091Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.249257351Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.280402159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.248774060Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.278362990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:21:40 functional-113333 dockerd[6858]: time="2025-09-29T11:21:40.317234122Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:22:48 functional-113333 dockerd[6858]: time="2025-09-29T11:22:48.354941240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.250100634Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.278209097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:23:05 functional-113333 dockerd[6858]: time="2025-09-29T11:23:05.329781423Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.250410392Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.279213587Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
813edc572aee3 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 756b234fa6e2a busybox-mount
797ed74fc1800 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 69f30c3f27ac7 hello-node-connect-7d85dfc575-pvq4m
f19913170bea1 nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 5 minutes ago Running nginx 0 74ea6477a50a8 nginx-svc
9233722b13058 kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 5 minutes ago Running echo-server 0 5174dba697c69 hello-node-75c85bcc94-524nr
f228fbf887997 df0860106674d 5 minutes ago Running kube-proxy 3 d14826ecc1e95 kube-proxy-kp4d8
66ddd141ec1f6 52546a367cc9e 5 minutes ago Running coredns 2 0daa4d953b658 coredns-66bc5c9577-ndt25
0c1510903edfc 6e38f40d628db 5 minutes ago Running storage-provisioner 3 e14d4154c78df storage-provisioner
a34f86dc27328 5f1f5298c888d 5 minutes ago Running etcd 2 3ec9b5756cc18 etcd-functional-113333
264a78f9985e9 90550c43ad2bc 5 minutes ago Running kube-apiserver 0 cac40828278ac kube-apiserver-functional-113333
1153b7ac7d169 46169d968e920 5 minutes ago Running kube-scheduler 3 4466a2147b50c kube-scheduler-functional-113333
f40ad3c8f099f a0af72f2ec6d6 5 minutes ago Running kube-controller-manager 2 42f7aadb66137 kube-controller-manager-functional-113333
f92f6d64d6929 46169d968e920 5 minutes ago Exited kube-scheduler 2 ba17dfc161521 kube-scheduler-functional-113333
a13393a00a30d df0860106674d 5 minutes ago Exited kube-proxy 2 871e0c1c685a0 kube-proxy-kp4d8
b3296caa44f98 6e38f40d628db 6 minutes ago Exited storage-provisioner 2 3ae050bca60a4 storage-provisioner
ebb584477fb59 52546a367cc9e 6 minutes ago Exited coredns 1 c858f76b2e6af coredns-66bc5c9577-ndt25
fe534996d3885 a0af72f2ec6d6 6 minutes ago Exited kube-controller-manager 1 26caa1f2477bb kube-controller-manager-functional-113333
d15759c72f024 5f1f5298c888d 6 minutes ago Exited etcd 1 daea5fbf20513 etcd-functional-113333
==> coredns [66ddd141ec1f] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:47174 - 22489 "HINFO IN 8566101316675011462.5533812213724835804. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016422216s
==> coredns [ebb584477fb5] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:49493 - 53604 "HINFO IN 1223955324215989705.3505866021153624538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.425693464s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-113333
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-113333
kubernetes.io/os=linux
minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
minikube.k8s.io/name=functional-113333
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_09_29T11_17_20_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 29 Sep 2025 11:17:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-113333
AcquireTime: <unset>
RenewTime: Mon, 29 Sep 2025 11:25:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 29 Sep 2025 11:20:30 +0000 Mon, 29 Sep 2025 11:17:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 29 Sep 2025 11:20:30 +0000 Mon, 29 Sep 2025 11:17:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 29 Sep 2025 11:20:30 +0000 Mon, 29 Sep 2025 11:17:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 29 Sep 2025 11:20:30 +0000 Mon, 29 Sep 2025 11:17:18 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-113333
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863456Ki
pods: 110
System Info:
Machine ID: b2c1ed2445d24531beaede9409d240bc
System UUID: 0575d937-ba65-482d-bfc6-2fea38fe2d9c
Boot ID: 7892f883-017b-40ec-b18f-d6c900a242a7
Kernel Version: 6.8.0-1040-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.4.0
Kubelet Version: v1.34.0
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-524nr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m18s
default hello-node-connect-7d85dfc575-pvq4m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m16s
default mysql-5bb876957f-7fc8m 600m (7%) 700m (8%) 512Mi (1%) 700Mi (2%) 5m4s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m17s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m11s
kube-system coredns-66bc5c9577-ndt25 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 7m46s
kube-system etcd-functional-113333 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 7m52s
kube-system kube-apiserver-functional-113333 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m40s
kube-system kube-controller-manager-functional-113333 200m (2%) 0 (0%) 0 (0%) 0 (0%) 7m53s
kube-system kube-proxy-kp4d8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m46s
kube-system kube-scheduler-functional-113333 100m (1%) 0 (0%) 0 (0%) 0 (0%) 7m52s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m46s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-vxgjm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
kubernetes-dashboard kubernetes-dashboard-855c9754f9-xb9xs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (16%) 700m (8%)
memory 682Mi (2%) 870Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m45s kube-proxy
Normal Starting 5m41s kube-proxy
Normal Starting 6m37s kube-proxy
Normal NodeAllocatableEnforced 7m52s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 7m52s kubelet Node functional-113333 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m52s kubelet Node functional-113333 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m52s kubelet Node functional-113333 status is now: NodeHasSufficientPID
Normal Starting 7m52s kubelet Starting kubelet.
Normal RegisteredNode 7m47s node-controller Node functional-113333 event: Registered Node functional-113333 in Controller
Normal RegisteredNode 6m34s node-controller Node functional-113333 event: Registered Node functional-113333 in Controller
Normal Starting 5m44s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m44s (x8 over 5m44s) kubelet Node functional-113333 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m44s (x8 over 5m44s) kubelet Node functional-113333 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m44s (x7 over 5m44s) kubelet Node functional-113333 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m44s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m39s node-controller Node functional-113333 event: Registered Node functional-113333 in Controller
==> dmesg <==
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff be 68 62 72 3f fa 08 06
[ +0.151777] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a d8 70 38 23 e4 08 06
[Sep29 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
[ +2.956459] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e b8 ba d4 3b c3 08 06
[ +0.000574] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
[Sep29 11:15] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 03 82 6d ea 7e 08 06
[ +0.000575] IPv4: martian source 10.244.0.34 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
[ +0.000489] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a d2 63 ea f6 fc 08 06
[ +12.299165] IPv4: martian source 10.244.0.35 from 10.244.0.26, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
[ +0.326039] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
[Sep29 11:17] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a bf 42 60 d0 c2 08 06
[Sep29 11:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 74 32 c9 0e 09 08 06
[Sep29 11:19] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
[ +0.000016] ll header: 00000000: ff ff ff ff ff ff 7e 54 87 73 ab b0 08 06
==> etcd [a34f86dc2732] <==
{"level":"warn","ts":"2025-09-29T11:19:28.818486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.832866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.836407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.842951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.848846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.854888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.861324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.867052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.873767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.881986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.887740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.893473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.899284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.905190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37364","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.911741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.918130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.924691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.931306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.937510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.943973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37480","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.950640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.962928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.968730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:28.974475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:19:29.025265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
==> etcd [d15759c72f02] <==
{"level":"warn","ts":"2025-09-29T11:18:32.787866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.794501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.800938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.811969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.818018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.823898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-09-29T11:18:32.867090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-09-29T11:19:11.921894Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-09-29T11:19:11.921971Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-09-29T11:19:11.922045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-29T11:19:18.923756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-09-29T11:19:18.923901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-29T11:19:18.923935Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-09-29T11:19:18.924071Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-09-29T11:19:18.924088Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-09-29T11:19:18.924504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-29T11:19:18.924570Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-29T11:19:18.924583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-09-29T11:19:18.925137Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-09-29T11:19:18.925162Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"error","ts":"2025-09-29T11:19:18.925173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-29T11:19:18.926784Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-09-29T11:19:18.926844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-09-29T11:19:18.926867Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-09-29T11:19:18.926893Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> kernel <==
11:25:11 up 1:07, 0 users, load average: 0.16, 0.63, 1.44
Linux functional-113333 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep 9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [264a78f9985e] <==
I0929 11:19:30.214291 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I0929 11:19:30.376714 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0929 11:19:30.893419 1 controller.go:667] quota admission added evaluator for: deployments.apps
I0929 11:19:30.921395 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I0929 11:19:30.939238 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0929 11:19:30.945891 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0929 11:19:32.813315 1 controller.go:667] quota admission added evaluator for: endpoints
I0929 11:19:33.113940 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0929 11:19:49.080074 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.238.203"}
I0929 11:19:53.703390 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I0929 11:19:53.811677 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.106.207"}
I0929 11:19:54.765604 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.105.116"}
I0929 11:19:55.660461 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.31.211"}
I0929 11:20:07.254585 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.112.162"}
I0929 11:20:11.760329 1 controller.go:667] quota admission added evaluator for: namespaces
I0929 11:20:11.865830 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.241.54"}
I0929 11:20:11.875766 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.72.11"}
I0929 11:20:38.847641 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:20:48.475305 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:22:03.299121 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:22:08.579804 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:23:11.462665 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:23:35.006960 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:24:22.064164 1 stats.go:136] "Error getting keys" err="empty key: \"\""
I0929 11:24:58.857235 1 stats.go:136] "Error getting keys" err="empty key: \"\""
==> kube-controller-manager [f40ad3c8f099] <==
I0929 11:19:32.773277 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I0929 11:19:32.775521 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I0929 11:19:32.777750 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I0929 11:19:32.779925 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I0929 11:19:32.781116 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I0929 11:19:32.783370 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I0929 11:19:32.785612 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I0929 11:19:32.810084 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I0929 11:19:32.810106 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I0929 11:19:32.810131 1 shared_informer.go:356] "Caches are synced" controller="TTL"
I0929 11:19:32.810141 1 shared_informer.go:356] "Caches are synced" controller="expand"
I0929 11:19:32.810171 1 shared_informer.go:356] "Caches are synced" controller="attach detach"
I0929 11:19:32.810264 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I0929 11:19:32.810284 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I0929 11:19:32.810289 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I0929 11:19:32.811451 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I0929 11:19:32.812669 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I0929 11:19:32.815410 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I0929 11:19:32.825597 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E0929 11:20:11.807953 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 11:20:11.812019 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 11:20:11.813334 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 11:20:11.816473 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 11:20:11.818178 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E0929 11:20:11.823135 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-controller-manager [fe534996d388] <==
I0929 11:18:37.926251 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I0929 11:18:37.926267 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I0929 11:18:37.926311 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I0929 11:18:37.926415 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I0929 11:18:37.926505 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I0929 11:18:37.926633 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I0929 11:18:37.926641 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I0929 11:18:37.928534 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I0929 11:18:37.929641 1 shared_informer.go:356] "Caches are synced" controller="GC"
I0929 11:18:37.931843 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I0929 11:18:37.931894 1 shared_informer.go:356] "Caches are synced" controller="node"
I0929 11:18:37.932002 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I0929 11:18:37.932054 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I0929 11:18:37.932061 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I0929 11:18:37.932071 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I0929 11:18:37.934137 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I0929 11:18:37.935302 1 shared_informer.go:356] "Caches are synced" controller="taint"
I0929 11:18:37.935409 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0929 11:18:37.935477 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-113333"
I0929 11:18:37.935514 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0929 11:18:37.935768 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I0929 11:18:37.937689 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I0929 11:18:37.938908 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I0929 11:18:37.940987 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I0929 11:18:37.958320 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-proxy [a13393a00a30] <==
I0929 11:19:24.202240 1 server_linux.go:53] "Using iptables proxy"
I0929 11:19:24.271373 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E0929 11:19:24.272467 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-113333&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
==> kube-proxy [f228fbf88799] <==
I0929 11:19:30.705538 1 server_linux.go:53] "Using iptables proxy"
I0929 11:19:30.759473 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I0929 11:19:30.859648 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I0929 11:19:30.859681 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E0929 11:19:30.859762 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0929 11:19:30.883864 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0929 11:19:30.883939 1 server_linux.go:132] "Using iptables Proxier"
I0929 11:19:30.889927 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0929 11:19:30.890375 1 server.go:527] "Version info" version="v1.34.0"
I0929 11:19:30.890413 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0929 11:19:30.892062 1 config.go:106] "Starting endpoint slice config controller"
I0929 11:19:30.892082 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I0929 11:19:30.892103 1 config.go:200] "Starting service config controller"
I0929 11:19:30.892111 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I0929 11:19:30.892177 1 config.go:403] "Starting serviceCIDR config controller"
I0929 11:19:30.892235 1 config.go:309] "Starting node config controller"
I0929 11:19:30.892257 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I0929 11:19:30.892236 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I0929 11:19:30.992294 1 shared_informer.go:356] "Caches are synced" controller="service config"
I0929 11:19:30.992315 1 shared_informer.go:356] "Caches are synced" controller="node config"
I0929 11:19:30.993042 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I0929 11:19:30.993055 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [1153b7ac7d16] <==
I0929 11:19:28.187267 1 serving.go:386] Generated self-signed cert in-memory
W0929 11:19:29.407349 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0929 11:19:29.407400 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0929 11:19:29.407413 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W0929 11:19:29.407423 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0929 11:19:29.422399 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
I0929 11:19:29.422419 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0929 11:19:29.424140 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0929 11:19:29.424168 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0929 11:19:29.425081 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I0929 11:19:29.425179 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0929 11:19:29.524565 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [f92f6d64d692] <==
I0929 11:19:24.385645 1 serving.go:386] Generated self-signed cert in-memory
==> kubelet <==
Sep 29 11:23:51 functional-113333 kubelet[9100]: E0929 11:23:51.232038 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:23:58 functional-113333 kubelet[9100]: E0929 11:23:58.231727 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:23:58 functional-113333 kubelet[9100]: E0929 11:23:58.231796 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:24:00 functional-113333 kubelet[9100]: E0929 11:24:00.230031 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
Sep 29 11:24:03 functional-113333 kubelet[9100]: E0929 11:24:03.238934 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:24:10 functional-113333 kubelet[9100]: E0929 11:24:10.231610 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:24:11 functional-113333 kubelet[9100]: E0929 11:24:11.231318 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:24:15 functional-113333 kubelet[9100]: E0929 11:24:15.229732 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
Sep 29 11:24:15 functional-113333 kubelet[9100]: E0929 11:24:15.231702 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:24:22 functional-113333 kubelet[9100]: E0929 11:24:22.231147 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:24:24 functional-113333 kubelet[9100]: E0929 11:24:24.231637 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:24:26 functional-113333 kubelet[9100]: E0929 11:24:26.230175 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
Sep 29 11:24:26 functional-113333 kubelet[9100]: E0929 11:24:26.232053 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:24:35 functional-113333 kubelet[9100]: E0929 11:24:35.231558 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:24:35 functional-113333 kubelet[9100]: E0929 11:24:35.231652 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:24:40 functional-113333 kubelet[9100]: E0929 11:24:40.232147 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:24:41 functional-113333 kubelet[9100]: E0929 11:24:41.236866 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
Sep 29 11:24:47 functional-113333 kubelet[9100]: E0929 11:24:47.231840 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:24:47 functional-113333 kubelet[9100]: E0929 11:24:47.231960 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:24:52 functional-113333 kubelet[9100]: E0929 11:24:52.231046 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:24:53 functional-113333 kubelet[9100]: E0929 11:24:53.229576 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
Sep 29 11:25:01 functional-113333 kubelet[9100]: E0929 11:25:01.231031 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
Sep 29 11:25:02 functional-113333 kubelet[9100]: E0929 11:25:02.231899 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
Sep 29 11:25:05 functional-113333 kubelet[9100]: E0929 11:25:05.231678 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
Sep 29 11:25:07 functional-113333 kubelet[9100]: E0929 11:25:07.230139 9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
==> storage-provisioner [0c1510903edf] <==
W0929 11:24:47.225714 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:49.229025 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:49.237147 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:51.240443 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:51.245952 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:53.249580 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:53.253896 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:55.257102 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:55.261991 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:57.265339 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:57.270624 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:59.273759 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:24:59.277556 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:01.280718 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:01.285889 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:03.289135 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:03.293152 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:05.296601 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:05.300790 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:07.303556 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:07.307164 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:09.310034 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:09.313853 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:11.316823 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:25:11.320574 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [b3296caa44f9] <==
I0929 11:18:45.075990 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0929 11:18:45.082442 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0929 11:18:45.082490 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W0929 11:18:45.084662 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:18:48.539506 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:18:52.799812 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:18:56.398213 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:18:59.451540 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:02.473739 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:02.478257 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0929 11:19:02.478435 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0929 11:19:02.478502 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc00fa55-b5d7-4096-ad35-b571280c955a", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488 became leader
I0929 11:19:02.478593 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
W0929 11:19:02.480302 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:02.483444 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I0929 11:19:02.578842 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
W0929 11:19:04.486480 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:04.490606 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:06.494256 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:06.498085 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:08.501237 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:08.506582 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:10.509604 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W0929 11:19:10.513944 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
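Every kubelet error in the log above is the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) on docker.io/mysql:5.7, docker.io/nginx, and the digest-pinned dashboard images. A minimal mitigation sketch for a local re-run, assuming the images can be pulled on the CI host itself and side-loaded into the node so kubelet never has to contact docker.io (profile name taken from this run; these commands are illustrative and were not part of the test):

  # pull on the host (within the host's own rate budget or with docker login),
  # then copy the image into the minikube node's container runtime
  docker pull docker.io/mysql:5.7
  out/minikube-linux-amd64 -p functional-113333 image load docker.io/mysql:5.7
  docker pull docker.io/nginx
  out/minikube-linux-amd64 -p functional-113333 image load docker.io/nginx
  # the digest-pinned images (kubernetesui/dashboard, kubernetesui/metrics-scraper)
  # would need the same treatment using their full repo@sha256 references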
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
helpers_test.go:269: (dbg) Run: kubectl --context functional-113333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1 (80.491761ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-113333/192.168.49.2
Start Time: Mon, 29 Sep 2025 11:20:05 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.11
IPs:
IP: 10.244.0.11
Containers:
mount-munger:
Container ID: docker://813edc572aee3fca8ca39332981b0dc962ca018d4ff0c26f83d50d21bf947de7
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 29 Sep 2025 11:20:07 +0000
Finished: Mon, 29 Sep 2025 11:20:07 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7jzg (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-n7jzg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m6s default-scheduler Successfully assigned default/busybox-mount to functional-113333
Normal Pulling 5m6s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m5s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.434s (1.434s including waiting). Image size: 4403845 bytes.
Normal Created 5m5s kubelet Created container: mount-munger
Normal Started 5m5s kubelet Started container mount-munger
Name: mysql-5bb876957f-7fc8m
Namespace: default
Priority: 0
Service Account: default
Node: functional-113333/192.168.49.2
Start Time: Mon, 29 Sep 2025 11:20:07 +0000
Labels: app=mysql
pod-template-hash=5bb876957f
Annotations: <none>
Status: Pending
IP: 10.244.0.12
IPs:
IP: 10.244.0.12
Controlled By: ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP (mysql)
Host Port: 0/TCP (mysql)
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pwbxp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-pwbxp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m5s default-scheduler Successfully assigned default/mysql-5bb876957f-7fc8m to functional-113333
Normal Pulling 2m7s (x5 over 5m5s) kubelet Pulling image "docker.io/mysql:5.7"
Warning Failed 2m7s (x5 over 5m5s) kubelet Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning Failed 2m7s (x5 over 5m5s) kubelet Error: ErrImagePull
Warning Failed 74s (x15 over 5m4s) kubelet Error: ImagePullBackOff
Normal BackOff 10s (x20 over 5m4s) kubelet Back-off pulling image "docker.io/mysql:5.7"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-113333/192.168.49.2
Start Time: Mon, 29 Sep 2025 11:20:00 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vqmng (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-vqmng:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m12s default-scheduler Successfully assigned default/sp-pod to functional-113333
Warning Failed 5m11s kubelet Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal Pulling 2m24s (x5 over 5m11s) kubelet Pulling image "docker.io/nginx"
Warning Failed 2m24s (x5 over 5m11s) kubelet Error: ErrImagePull
Warning Failed 2m24s (x4 over 4m57s) kubelet Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal BackOff 5s (x21 over 5m11s) kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 5s (x21 over 5m11s) kubelet Error: ImagePullBackOff
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vxgjm" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xb9xs" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.89s)
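The 301.89s failure is the downstream effect of those pull errors: the kubernetes-dashboard pod never left ImagePullBackOff, so `dashboard --url` had nothing to print before the test gave up. An alternative mitigation sketch, assuming Docker Hub credentials are available to the cluster (the secret name, username, and token below are placeholders, not values from this run):

  # register credentials and attach them to the default service account so
  # docker.io pulls count against an authenticated quota instead of the
  # anonymous per-IP limit
  kubectl --context functional-113333 create secret docker-registry dockerhub-creds \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<access-token>
  kubectl --context functional-113333 patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
  # the dashboard pods run in the kubernetes-dashboard namespace under their own
  # service account, so the same secret/patch would be needed there as well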