=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] stderr:
I1219 02:37:08.812689 15434 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:08.812816 15434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.812824 15434 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:08.812830 15434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.813061 15434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:08.813277 15434 mustload.go:66] Loading cluster: functional-991175
I1219 02:37:08.813622 15434 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:08.815300 15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:08.815482 15434 api_server.go:166] Checking apiserver status ...
I1219 02:37:08.815515 15434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:37:08.817535 15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.817855 15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:08.817879 15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.818032 15434 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:08.918396 15434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5319/cgroup
W1219 02:37:08.930427 15434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5319/cgroup: Process exited with status 1
stdout:
stderr:
I1219 02:37:08.930483 15434 ssh_runner.go:195] Run: ls
I1219 02:37:08.936517 15434 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8441/healthz ...
I1219 02:37:08.941184 15434 api_server.go:279] https://192.168.39.176:8441/healthz returned 200:
ok
W1219 02:37:08.941217 15434 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:37:08.941354 15434 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:08.941369 15434 addons.go:70] Setting dashboard=true in profile "functional-991175"
I1219 02:37:08.941380 15434 addons.go:239] Setting addon dashboard=true in "functional-991175"
I1219 02:37:08.941399 15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:08.942973 15434 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:37:08.942988 15434 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:37:08.945299 15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.945637 15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:08.945658 15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.945802 15434 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:09.055093 15434 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:37:09.059807 15434 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:37:09.063129 15434 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:37:10.336470 15434 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.273308304s)
I1219 02:37:10.336580 15434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:37:14.820687 15434 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.484058654s)
I1219 02:37:14.820785 15434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:37:15.649164 15434 addons.go:500] Verifying addon dashboard=true in "functional-991175"
I1219 02:37:15.652790 15434 out.go:179] * Verifying dashboard addon...
I1219 02:37:15.655233 15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:15.655883 15434 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:37:15.655910 15434 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:37:15.655919 15434 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:37:15.655926 15434 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:37:15.655931 15434 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:37:15.656386 15434 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:37:15.674426 15434 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:37:15.674457 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.174526 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.659913 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.160607 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.670174 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.163739 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.660387 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.161537 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.660801 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.169384 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.659947 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.159769 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.660888 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.160051 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.660618 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.162383 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.659898 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.160156 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.659348 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.160675 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.660285 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.160718 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.662406 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.160782 15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.660411 15434 kapi.go:107] duration metric: took 12.004026678s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:37:27.661763 15434 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-991175 addons enable metrics-server
I1219 02:37:27.662742 15434 addons.go:202] Writing out "functional-991175" config to set dashboard=true...
W1219 02:37:27.662948 15434 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:37:27.663346 15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:27.665381 15434 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy kubernetes-dashboard 89bba8fe-5a12-4399-93bb-62fba77c45b5 914 0 2025-12-19 02:37:14 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:31505,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.96.134.161,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.134.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:37:27.665529 15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:27.668628 15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:27.668975 15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:27.669030 15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:27.669544 15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:27.676529 15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.679837 15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.688325 15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.692003 15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.873079 15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.932375 15434 out.go:179] * Dashboard Token:
I1219 02:37:27.933631 15434 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1QSWp4QUJwaGU5bFJPMGNuVWNJZUZGZDVqckx1Y0htYWNzSk1OeHBZMkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MjQ3LCJpYXQiOjE3NjYxMTE4NDcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMTdmMzhiNGUtYjcwOC00M2UyLWEzYjgtMDY2ZGFmOWZmZGNhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMGY5ZDBhMWItMDBkMy00NzdhLTk3M2ItZjhkOWRmOTIzZWNkIn19LCJuYmYiOjE3NjYxMTE4NDcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.0RMrKt2v6YHAb_ZVDM5h-JigTy5Hl8kYqGmoMD4EC4wyMFkAFxZyRA8Ho5k6fudA6ldti_aDZYjGqn02TDZyKb9cQdApIoEIJOr6RFUPWg9fJ0z_ptZZTXDSMEDCsX0iOpz9mQyL5DTg80yynOoXYS2o4RiBxPG1dF4AiNF7u8_vhiFgCu_gN4ANQqvSrN3HyIAbCujtlpAi47mn7JNLAJPaQIgoxCql4Q1fe8iY5cKwRr-xEhT_vGLfLb4cNFZWtdX1_4JVomZnrUHDnb_h2j8bm3V2E_U9Win9ubnoWo_3QBaQr-Hih1EsY6Swr4W48ISBVDhr1Gkz9YFoIIkMeA
I1219 02:37:27.934860 15434 out.go:203] https://192.168.39.176:31505
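For context on the "output didn't produce a URL" failure above, here is a minimal, hypothetical Go sketch of that kind of check: scan the captured stdout of the dashboard command for the first http(s) URL and fail if none appears. The helper name, regex, and sample strings are illustrative assumptions, not the actual functional_test.go implementation.

package main

import (
	"fmt"
	"regexp"
)

// urlRe matches the first http or https URL in a chunk of command output.
var urlRe = regexp.MustCompile(`https?://[^\s]+`)

// firstURL is a hypothetical helper: it returns the first URL found in the
// captured stdout, or an error mirroring the failure message reported above.
func firstURL(stdout string) (string, error) {
	if u := urlRe.FindString(stdout); u != "" {
		return u, nil
	}
	return "", fmt.Errorf("output didn't produce a URL")
}

func main() {
	// In this run the URL only shows up in the stderr log above; an empty
	// stdout would trip the same error the test reported.
	fmt.Println(firstURL(""))
	fmt.Println(firstURL("* Opening https://192.168.39.176:31505 in your default browser..."))
}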
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-991175 -n functional-991175
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p functional-991175 logs -n 25
E1219 02:37:28.692065 8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 logs -n 25: (1.458053123s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ update-context │ functional-991175 update-context --alsologtostderr -v=2 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ update-context │ functional-991175 update-context --alsologtostderr -v=2 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ image │ functional-991175 image ls --format short --alsologtostderr │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ image │ functional-991175 image ls --format yaml --alsologtostderr │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh pgrep buildkitd │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ image │ functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh stat /mount-9p/created-by-test │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh stat /mount-9p/created-by-pod │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh sudo umount -f /mount-9p │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh findmnt -T /mount-9p | grep 9p │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ mount │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdspecific-port1680034923/001:/mount-9p --alsologtostderr -v=1 --port 38769 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ ssh │ functional-991175 ssh findmnt -T /mount-9p | grep 9p │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh -- ls -la /mount-9p │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh sudo umount -f /mount-9p │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ mount │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount2 --alsologtostderr -v=1 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ mount │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount3 --alsologtostderr -v=1 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ ssh │ functional-991175 ssh findmnt -T /mount1 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ mount │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount1 --alsologtostderr -v=1 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ ssh │ functional-991175 ssh findmnt -T /mount1 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh findmnt -T /mount2 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ ssh │ functional-991175 ssh findmnt -T /mount3 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ mount │ -p functional-991175 --kill=true │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ │
│ image │ functional-991175 image ls --format json --alsologtostderr │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ image │ functional-991175 image ls --format table --alsologtostderr │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
│ image │ functional-991175 image ls │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/19 02:37:08
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1219 02:37:08.708923 15417 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:08.709025 15417 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.709041 15417 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:08.709047 15417 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.709268 15417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:08.709676 15417 out.go:368] Setting JSON to false
I1219 02:37:08.710481 15417 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1168,"bootTime":1766110661,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1219 02:37:08.710534 15417 start.go:143] virtualization: kvm guest
I1219 02:37:08.712425 15417 out.go:179] * [functional-991175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1219 02:37:08.713666 15417 notify.go:221] Checking for updates...
I1219 02:37:08.713691 15417 out.go:179] - MINIKUBE_LOCATION=22230
I1219 02:37:08.714942 15417 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1219 02:37:08.716323 15417 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
I1219 02:37:08.718327 15417 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
I1219 02:37:08.722180 15417 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1219 02:37:08.723394 15417 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1219 02:37:08.724843 15417 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:08.725298 15417 driver.go:422] Setting default libvirt URI to qemu:///system
I1219 02:37:08.754231 15417 out.go:179] * Using the kvm2 driver based on existing profile
I1219 02:37:08.755256 15417 start.go:309] selected driver: kvm2
I1219 02:37:08.755274 15417 start.go:928] validating driver "kvm2" against &{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 02:37:08.755408 15417 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1219 02:37:08.756878 15417 cni.go:84] Creating CNI manager for ""
I1219 02:37:08.756968 15417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1219 02:37:08.757049 15417 start.go:353] cluster config:
{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 02:37:08.758485 15417 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
118e50785c6ac 59f642f485d26 2 seconds ago Running kubernetes-dashboard-web 0 412772cade7a5 kubernetes-dashboard-web-5c9f966b98-vfznd kubernetes-dashboard
bcf56a36a47d3 d9cbc9f4053ca 8 seconds ago Running kubernetes-dashboard-metrics-scraper 0 4039f7831cfa1 kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv kubernetes-dashboard
95cc9db4f6b90 04da2b0513cd7 15 seconds ago Running myfrontend 0 b20095e6b429e sp-pod default
f9b5cfdb430c6 56cc512116c8f 17 seconds ago Exited mount-munger 0 e22952349e54d busybox-mount default
4c716c69c6bc0 9056ab77afb8e 29 seconds ago Running echo-server 0 102d9d00f939d hello-node-75c85bcc94-f6w8n default
3eb8137aed9dd 9056ab77afb8e 30 seconds ago Running echo-server 0 3d99677466fcb hello-node-connect-7d85dfc575-czl7n default
d5c08603141b5 20d0be4ee4524 32 seconds ago Running mysql 0 0c0b388498b01 mysql-6bcdcbc558-554d8 default
58db843b3841b 6e38f40d628db 57 seconds ago Running storage-provisioner 4 6c4004e62eb0a storage-provisioner kube-system
a5e0c8b8a3fab 36eef8e07bdd6 About a minute ago Running kube-proxy 2 d66adc23ee232 kube-proxy-wdgkq kube-system
66f1404c482dc 52546a367cc9e About a minute ago Running coredns 2 3b91f9a6fcdf1 coredns-66bc5c9577-5qflf kube-system
f648417def1c9 6e38f40d628db About a minute ago Exited storage-provisioner 3 6c4004e62eb0a storage-provisioner kube-system
3a40fa1b46b6b aa27095f56193 About a minute ago Running kube-apiserver 0 477a78fb1310a kube-apiserver-functional-991175 kube-system
af6b518575775 aec12dadf56dd About a minute ago Running kube-scheduler 2 875914cb42f8c kube-scheduler-functional-991175 kube-system
aeed4cc1daccc 5826b25d990d7 About a minute ago Running kube-controller-manager 3 a5c62d5fd27ba kube-controller-manager-functional-991175 kube-system
da476738e5f1b a3e246e9556e9 About a minute ago Running etcd 2 9c3022414033a etcd-functional-991175 kube-system
dc8923e751097 5826b25d990d7 2 minutes ago Exited kube-controller-manager 2 a5c62d5fd27ba kube-controller-manager-functional-991175 kube-system
348a5c2688f20 a3e246e9556e9 2 minutes ago Exited etcd 1 9c3022414033a etcd-functional-991175 kube-system
429dbf7c0e75e 52546a367cc9e 2 minutes ago Exited coredns 1 3b91f9a6fcdf1 coredns-66bc5c9577-5qflf kube-system
263987cd3ab41 36eef8e07bdd6 2 minutes ago Exited kube-proxy 1 d66adc23ee232 kube-proxy-wdgkq kube-system
bf5b0a6cb75fc aec12dadf56dd 2 minutes ago Exited kube-scheduler 1 875914cb42f8c kube-scheduler-functional-991175 kube-system
==> containerd <==
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.360960648Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod9ea23ba6-0bd8-4e5f-90c6-7037d545eb69/d5c08603141b5e06cee23eedf2cc9e3a085d5d16632a1854542035e4b94b4c1e/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.362143215Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod0238d7b6-85df-4964-a6e6-7fb14714d248/3eb8137aed9ddf5f42db3e19b8d9b731cb62941dcfaf8ae2946ddad9bc1adc96/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.363135736Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod2dd3a376-a910-4065-a877-d5dd5989104c/4c716c69c6bc05ad11ee26a1ab6be7046e638bd0f866dd3a1e701b2a9530df17/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.365475797Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podc1a46824-5cec-4320-8f94-c8c554b0272c/bcf56a36a47d3428bb03eea9a006006029d10c9879b7becb842c7bb0e1774014/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.368626657Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd0b934acd85dea9a21ab1e414d833e00/af6b5185757751a4adb5a14512ee3a542a0c9812feb002595b542a3da537532c/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.369451884Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podeb78920fa8288e3e92099be56b0387ab/3a40fa1b46b6bb4b9a1917f03833d4a5537a27249d495edc09375bf6d2e61fc6/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.370642636Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8b50289536da016751fda33002b3c6dd/da476738e5f1b95b1f74582bdf07802870ffd176aa25a3c5c77c0c576c35f679/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.371506963Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podaa7a07822daf4df680cd252b6bdb1bb2/aeed4cc1daccc8314e531e5b310d6e30e12fbee865eb49338db7a0ccecf19759/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.372238776Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podebfcc1c4-e2f8-45ef-abde-a764cd68d374/66f1404c482dc20ddc28bc3ba9a6541a9fa56c30b598ee60e1eb96348aa624d3/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.373926021Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod09b44541-6422-4929-9467-c65fe5dd3f86/a5e0c8b8a3fabe401a1d0cfad5f633ae19db07a1ada2a523a07729ff6aab773e/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.374888223Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf4d8f0f0-a770-4556-8fcb-88c02bcdb4a9/95cc9db4f6b90126964eac00353d31d77bd9350acdd63ddd0b504401299d8771/hugetlb.2MB.events\""
Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.379655075Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod0b827772-7dd9-4175-86bc-0507e1b78055/58db843b3841bb930e38261156b1c8725df9bf507fd7a32a3b854031be81ea26/hugetlb.2MB.events\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.580536599Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web:1.7.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.582378355Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-web:1.7.0: active requests=0, bytes read=62507990"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.583938206Z" level=info msg="ImageCreate event name:\"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.587919694Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.588871036Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-web:1.7.0\" with image id \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\", repo tag \"docker.io/kubernetesui/dashboard-web:1.7.0\", repo digest \"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\", size \"62497108\" in 6.697931408s"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.588916963Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-web:1.7.0\" returns image reference \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.591282972Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-api:1.14.0\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.597752944Z" level=info msg="CreateContainer within sandbox \"412772cade7a5d04c80c5d4988ba69d199d63adb0403846d52abab5d4c3f572b\" for container name:\"kubernetes-dashboard-web\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.611080862Z" level=info msg="Container 118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36: CDI devices from CRI Config.CDIDevices: []"
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.624870931Z" level=info msg="CreateContainer within sandbox \"412772cade7a5d04c80c5d4988ba69d199d63adb0403846d52abab5d4c3f572b\" for name:\"kubernetes-dashboard-web\" returns container id \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.625942121Z" level=info msg="StartContainer for \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\""
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.627970258Z" level=info msg="connecting to shim 118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36" address="unix:///run/containerd/s/8d7e6d3b2b29e2bf77a37a37e876f04cbd444e7deb931946284a8d6cfdc4a302" protocol=ttrpc version=3
Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.768281782Z" level=info msg="StartContainer for \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\" returns successfully"
==> coredns [429dbf7c0e75ef36df5d65eccdbdbf117c37b5047a36bfd113fbf82e49bd04ce] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:53347 - 50564 "HINFO IN 8070156027199287086.8041306130364622304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017042702s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=466": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [66f1404c482dc20ddc28bc3ba9a6541a9fa56c30b598ee60e1eb96348aa624d3] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:41979 - 58544 "HINFO IN 5715910185780935871.5693441124122664811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016268999s
==> describe nodes <==
Name: functional-991175
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-991175
kubernetes.io/os=linux
minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
minikube.k8s.io/name=functional-991175
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_19T02_34_38_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 Dec 2025 02:34:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-991175
AcquireTime: <unset>
RenewTime: Fri, 19 Dec 2025 02:37:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 19 Dec 2025 02:37:17 +0000 Fri, 19 Dec 2025 02:34:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 19 Dec 2025 02:37:17 +0000 Fri, 19 Dec 2025 02:34:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 19 Dec 2025 02:37:17 +0000 Fri, 19 Dec 2025 02:34:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 19 Dec 2025 02:37:17 +0000 Fri, 19 Dec 2025 02:34:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.176
Hostname: functional-991175
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
System Info:
Machine ID: 1723782ad1a04ba5acbc6b8bdb9df320
System UUID: 1723782a-d1a0-4ba5-acbc-6b8bdb9df320
Boot ID: aeb9bd68-4db2-45f9-95ee-16fc41838eb9
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://2.2.0
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-75c85bcc94-f6w8n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42s
default hello-node-connect-7d85dfc575-czl7n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48s
default mysql-6bcdcbc558-554d8 600m (30%) 700m (35%) 512Mi (13%) 700Mi (17%) 50s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17s
kube-system coredns-66bc5c9577-5qflf 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m47s
kube-system etcd-functional-991175 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m52s
kube-system kube-apiserver-functional-991175 250m (12%) 0 (0%) 0 (0%) 0 (0%) 72s
kube-system kube-controller-manager-functional-991175 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system kube-proxy-wdgkq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m47s
kube-system kube-scheduler-functional-991175 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m52s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m45s
kubernetes-dashboard kubernetes-dashboard-api-5f84cf677c-t95d8 100m (5%) 250m (12%) 200Mi (5%) 400Mi (10%) 15s
kubernetes-dashboard kubernetes-dashboard-auth-75547cbd96-t758q 100m (5%) 250m (12%) 200Mi (5%) 400Mi (10%) 15s
kubernetes-dashboard kubernetes-dashboard-kong-9849c64bd-5ldxp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15s
kubernetes-dashboard kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv 100m (5%) 250m (12%) 200Mi (5%) 400Mi (10%) 15s
kubernetes-dashboard kubernetes-dashboard-web-5c9f966b98-vfznd 100m (5%) 250m (12%) 200Mi (5%) 400Mi (10%) 15s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1750m (87%) 1700m (85%)
memory 1482Mi (37%) 2470Mi (63%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m45s kube-proxy
Normal Starting 71s kube-proxy
Normal Starting 2m20s kube-proxy
Normal Starting 2m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m58s (x8 over 2m58s) kubelet Node functional-991175 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m58s (x8 over 2m58s) kubelet Node functional-991175 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m58s (x7 over 2m58s) kubelet Node functional-991175 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m58s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 2m52s kubelet Node functional-991175 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m52s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m52s kubelet Node functional-991175 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m52s kubelet Node functional-991175 status is now: NodeHasNoDiskPressure
Normal Starting 2m52s kubelet Starting kubelet.
Normal NodeReady 2m51s kubelet Node functional-991175 status is now: NodeReady
Normal RegisteredNode 2m48s node-controller Node functional-991175 event: Registered Node functional-991175 in Controller
Normal NodeHasSufficientMemory 2m4s (x8 over 2m4s) kubelet Node functional-991175 status is now: NodeHasSufficientMemory
Normal Starting 2m4s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 2m4s (x8 over 2m4s) kubelet Node functional-991175 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m4s (x7 over 2m4s) kubelet Node functional-991175 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m4s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 118s node-controller Node functional-991175 event: Registered Node functional-991175 in Controller
Normal Starting 77s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 76s (x8 over 76s) kubelet Node functional-991175 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 76s (x8 over 76s) kubelet Node functional-991175 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 76s (x7 over 76s) kubelet Node functional-991175 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 76s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 70s node-controller Node functional-991175 event: Registered Node functional-991175 in Controller
==> dmesg <==
[ +0.090988] kauditd_printk_skb: 18 callbacks suppressed
[ +0.123275] kauditd_printk_skb: 171 callbacks suppressed
[ +5.652153] kauditd_printk_skb: 18 callbacks suppressed
[ +6.955857] kauditd_printk_skb: 255 callbacks suppressed
[ +0.123245] kauditd_printk_skb: 44 callbacks suppressed
[Dec19 02:35] kauditd_printk_skb: 116 callbacks suppressed
[ +6.585224] kauditd_printk_skb: 60 callbacks suppressed
[ +6.713571] kauditd_printk_skb: 12 callbacks suppressed
[ +0.649255] kauditd_printk_skb: 14 callbacks suppressed
[ +1.860574] kauditd_printk_skb: 20 callbacks suppressed
[ +12.108279] kauditd_printk_skb: 8 callbacks suppressed
[ +0.112528] kauditd_printk_skb: 12 callbacks suppressed
[Dec19 02:36] kauditd_printk_skb: 84 callbacks suppressed
[ +5.205783] kauditd_printk_skb: 47 callbacks suppressed
[ +4.195739] kauditd_printk_skb: 75 callbacks suppressed
[ +12.544167] kauditd_printk_skb: 43 callbacks suppressed
[ +1.736160] kauditd_printk_skb: 53 callbacks suppressed
[ +1.814021] kauditd_printk_skb: 91 callbacks suppressed
[ +6.484366] kauditd_printk_skb: 38 callbacks suppressed
[ +4.784440] kauditd_printk_skb: 74 callbacks suppressed
[Dec19 02:37] kauditd_printk_skb: 53 callbacks suppressed
[ +4.477591] kauditd_printk_skb: 41 callbacks suppressed
[ +0.682477] crun[8106]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
[ +0.270027] kauditd_printk_skb: 180 callbacks suppressed
[ +10.433789] kauditd_printk_skb: 135 callbacks suppressed
==> etcd [348a5c2688f204ad24f7cf5f82189d287519be45d68cfd73cc5ef109ce2d773c] <==
{"level":"warn","ts":"2025-12-19T02:35:27.479418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.487148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.499498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.504216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60636","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.513617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.522432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:35:27.570621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-19T02:36:06.601097Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-19T02:36:06.601223Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-991175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
{"level":"error","ts":"2025-12-19T02:36:06.601331Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-19T02:36:06.603143Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-19T02:36:06.603231Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:36:06.603258Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f70d523d4475ce3b","current-leader-member-id":"f70d523d4475ce3b"}
{"level":"warn","ts":"2025-12-19T02:36:06.603276Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:36:06.603337Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:36:06.603356Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:36:06.603342Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-19T02:36:06.603394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.176:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:36:06.603398Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"error","ts":"2025-12-19T02:36:06.603382Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:36:06.603412Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"info","ts":"2025-12-19T02:36:06.606564Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.176:2380"}
{"level":"error","ts":"2025-12-19T02:36:06.606619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.176:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:36:06.606638Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.176:2380"}
{"level":"info","ts":"2025-12-19T02:36:06.606644Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-991175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
==> etcd [da476738e5f1b95b1f74582bdf07802870ffd176aa25a3c5c77c0c576c35f679] <==
{"level":"warn","ts":"2025-12-19T02:36:53.229453Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"196.049564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-19T02:36:53.229494Z","caller":"traceutil/trace.go:172","msg":"trace[160057841] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"196.096933ms","start":"2025-12-19T02:36:53.033391Z","end":"2025-12-19T02:36:53.229488Z","steps":["trace[160057841] 'agreement among raft nodes before linearized reading' (duration: 196.034311ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-19T02:36:53.229624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.725968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-19T02:36:53.229645Z","caller":"traceutil/trace.go:172","msg":"trace[1513768200] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"192.748937ms","start":"2025-12-19T02:36:53.036891Z","end":"2025-12-19T02:36:53.229640Z","steps":["trace[1513768200] 'agreement among raft nodes before linearized reading' (duration: 192.705275ms)"],"step_count":1}
{"level":"info","ts":"2025-12-19T02:36:55.498756Z","caller":"traceutil/trace.go:172","msg":"trace[1291379608] linearizableReadLoop","detail":"{readStateIndex:854; appliedIndex:854; }","duration":"243.800216ms","start":"2025-12-19T02:36:55.254940Z","end":"2025-12-19T02:36:55.498740Z","steps":["trace[1291379608] 'read index received' (duration: 243.79571ms)","trace[1291379608] 'applied index is now lower than readState.Index' (duration: 3.81µs)"],"step_count":2}
{"level":"warn","ts":"2025-12-19T02:36:55.498883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.928985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-19T02:36:55.498902Z","caller":"traceutil/trace.go:172","msg":"trace[1703508812] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"243.981317ms","start":"2025-12-19T02:36:55.254916Z","end":"2025-12-19T02:36:55.498897Z","steps":["trace[1703508812] 'agreement among raft nodes before linearized reading' (duration: 243.897928ms)"],"step_count":1}
{"level":"info","ts":"2025-12-19T02:36:55.499386Z","caller":"traceutil/trace.go:172","msg":"trace[1409033602] transaction","detail":"{read_only:false; response_revision:779; number_of_response:1; }","duration":"261.613207ms","start":"2025-12-19T02:36:55.237764Z","end":"2025-12-19T02:36:55.499377Z","steps":["trace[1409033602] 'process raft request' (duration: 261.437538ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-19T02:36:55.669734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.399397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-12-19T02:36:55.669787Z","caller":"traceutil/trace.go:172","msg":"trace[390670026] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:779; }","duration":"141.459817ms","start":"2025-12-19T02:36:55.528316Z","end":"2025-12-19T02:36:55.669775Z","steps":["trace[390670026] 'range keys from in-memory index tree' (duration: 141.348936ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-19T02:37:11.013626Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.99862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-6bcdcbc558-554d8\" limit:1 ","response":"range_response_count:1 size:3594"}
{"level":"info","ts":"2025-12-19T02:37:11.013973Z","caller":"traceutil/trace.go:172","msg":"trace[251922854] range","detail":"{range_begin:/registry/pods/default/mysql-6bcdcbc558-554d8; range_end:; response_count:1; response_revision:828; }","duration":"122.418947ms","start":"2025-12-19T02:37:10.891538Z","end":"2025-12-19T02:37:11.013957Z","steps":["trace[251922854] 'range keys from in-memory index tree' (duration: 121.211806ms)"],"step_count":1}
{"level":"warn","ts":"2025-12-19T02:37:19.832331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57776","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:19.880237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:19.912067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57822","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.007184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57840","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.027873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57848","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.062407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.087574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57886","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.099071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.117458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57926","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.132927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57962","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.178825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:37:20.197970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57996","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-19T02:37:26.033317Z","caller":"traceutil/trace.go:172","msg":"trace[486613573] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"105.215955ms","start":"2025-12-19T02:37:25.928086Z","end":"2025-12-19T02:37:26.033302Z","steps":["trace[486613573] 'process raft request' (duration: 105.116822ms)"],"step_count":1}
==> kernel <==
02:37:29 up 3 min, 0 users, load average: 2.05, 0.94, 0.37
Linux functional-991175 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [3a40fa1b46b6bb4b9a1917f03833d4a5537a27249d495edc09375bf6d2e61fc6] <==
I1219 02:37:11.789955 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
I1219 02:37:11.821031 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
I1219 02:37:11.849847 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 02:37:11.885430 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 02:37:11.910926 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 02:37:11.935153 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 02:37:14.266374 1 controller.go:667] quota admission added evaluator for: namespaces
I1219 02:37:14.502190 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.105.193.171"}
I1219 02:37:14.521545 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.178.141"}
I1219 02:37:14.598414 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.96.134.161"}
I1219 02:37:14.601418 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.66.74"}
I1219 02:37:14.635788 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.102.96.133"}
W1219 02:37:19.823593 1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:19.879732 1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:19.910608 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:19.986958 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.019109 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.062253 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.084280 1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 02:37:20.097921 1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.117477 1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.131662 1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:37:20.175577 1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 02:37:20.197698 1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
E1219 02:37:20.718895 1 conn.go:339] Error on socket receive: read tcp 192.168.39.176:8441->192.168.39.1:46740: use of closed network connection
==> kube-controller-manager [aeed4cc1daccc8314e531e5b310d6e30e12fbee865eb49338db7a0ccecf19759] <==
I1219 02:36:19.777386 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1219 02:36:19.779585 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1219 02:36:19.779669 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1219 02:36:19.783938 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1219 02:36:19.791916 1 shared_informer.go:356] "Caches are synced" controller="expand"
I1219 02:36:19.797826 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1219 02:36:19.802878 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1219 02:36:19.809055 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1219 02:36:19.811514 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1219 02:36:19.821913 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1219 02:36:19.825956 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1219 02:36:19.830229 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1219 02:37:19.801066 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
I1219 02:37:19.802233 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
I1219 02:37:19.802509 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
I1219 02:37:19.802732 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
I1219 02:37:19.802770 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
I1219 02:37:19.802899 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
I1219 02:37:19.803059 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
I1219 02:37:19.803281 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
I1219 02:37:19.803410 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
I1219 02:37:19.803822 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1219 02:37:19.871211 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1219 02:37:21.206139 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1219 02:37:21.273325 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-controller-manager [dc8923e7510979fd92dafeba69038936e5ec5fedbd8fb9747727a17402df7ab1] <==
I1219 02:35:31.611085 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1219 02:35:31.613839 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I1219 02:35:31.615851 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1219 02:35:31.616728 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1219 02:35:31.616773 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1219 02:35:31.619355 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
I1219 02:35:31.619895 1 shared_informer.go:356] "Caches are synced" controller="service account"
I1219 02:35:31.621911 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1219 02:35:31.623126 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1219 02:35:31.624677 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1219 02:35:31.624780 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
I1219 02:35:31.628085 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
I1219 02:35:31.628342 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1219 02:35:31.628584 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1219 02:35:31.631100 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1219 02:35:31.632252 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1219 02:35:31.637767 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
I1219 02:35:31.638112 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1219 02:35:31.703589 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1219 02:35:31.703626 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1219 02:35:31.703632 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1219 02:35:31.716958 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1219 02:35:33.220275 1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
E1219 02:36:01.610613 1 resource_quota_controller.go:446] "Unhandled Error" err="failed to discover resources: Get \"https://192.168.39.176:8441/api\": dial tcp 192.168.39.176:8441: connect: connection refused" logger="UnhandledError"
I1219 02:36:01.718371 1 garbagecollector.go:789] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.176:8441/api\": dial tcp 192.168.39.176:8441: connect: connection refused"
==> kube-proxy [263987cd3ab41d50eaeffbc947f6d9b9a461a3041001bf0430df62ae1cca1aec] <==
I1219 02:35:08.159392 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1219 02:35:08.261027 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1219 02:35:08.261087 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.176"]
E1219 02:35:08.261353 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1219 02:35:08.351709 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1219 02:35:08.351957 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1219 02:35:08.352325 1 server_linux.go:132] "Using iptables Proxier"
I1219 02:35:08.367232 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1219 02:35:08.367783 1 server.go:527] "Version info" version="v1.34.3"
I1219 02:35:08.367798 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:35:08.369122 1 config.go:106] "Starting endpoint slice config controller"
I1219 02:35:08.369306 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1219 02:35:08.370171 1 config.go:403] "Starting serviceCIDR config controller"
I1219 02:35:08.370277 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1219 02:35:08.371030 1 config.go:309] "Starting node config controller"
I1219 02:35:08.371121 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1219 02:35:08.371177 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1219 02:35:08.371502 1 config.go:200] "Starting service config controller"
I1219 02:35:08.371593 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1219 02:35:08.469616 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1219 02:35:08.470601 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1219 02:35:08.471813 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-proxy [a5e0c8b8a3fabe401a1d0cfad5f633ae19db07a1ada2a523a07729ff6aab773e] <==
I1219 02:36:17.715142 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1219 02:36:17.815413 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1219 02:36:17.815605 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.176"]
E1219 02:36:17.815755 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1219 02:36:17.851627 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1219 02:36:17.851694 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1219 02:36:17.851715 1 server_linux.go:132] "Using iptables Proxier"
I1219 02:36:17.861041 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1219 02:36:17.861382 1 server.go:527] "Version info" version="v1.34.3"
I1219 02:36:17.861571 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:36:17.866200 1 config.go:200] "Starting service config controller"
I1219 02:36:17.866212 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1219 02:36:17.866229 1 config.go:106] "Starting endpoint slice config controller"
I1219 02:36:17.866233 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1219 02:36:17.866242 1 config.go:403] "Starting serviceCIDR config controller"
I1219 02:36:17.866245 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1219 02:36:17.866871 1 config.go:309] "Starting node config controller"
I1219 02:36:17.866887 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1219 02:36:17.866894 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1219 02:36:17.967363 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1219 02:36:17.967390 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1219 02:36:17.967424 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [af6b5185757751a4adb5a14512ee3a542a0c9812feb002595b542a3da537532c] <==
I1219 02:36:15.556827 1 serving.go:386] Generated self-signed cert in-memory
I1219 02:36:16.395890 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
I1219 02:36:16.396065 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:36:16.401937 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I1219 02:36:16.402215 1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
I1219 02:36:16.402184 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:36:16.402472 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:36:16.403097 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1219 02:36:16.403383 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1219 02:36:16.402196 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1219 02:36:16.410123 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1219 02:36:16.504637 1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
I1219 02:36:16.505172 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:36:16.512856 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
==> kube-scheduler [bf5b0a6cb75fcd042dcd1db080ba4304d922c1440b7a984bb5f23505f353aea9] <==
E1219 02:35:28.268687 1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1219 02:35:28.269148 1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 02:35:28.270188 1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1219 02:35:28.270356 1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1219 02:35:28.270458 1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1219 02:35:28.270626 1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1219 02:35:28.270799 1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1219 02:35:28.270918 1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1219 02:35:28.271088 1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1219 02:35:28.271400 1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1219 02:35:28.271519 1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1219 02:35:28.271631 1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1219 02:35:28.273042 1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1219 02:35:28.273275 1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1219 02:35:28.273464 1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1219 02:35:28.273672 1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1219 02:35:28.273954 1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
I1219 02:36:11.789586 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I1219 02:36:11.789881 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1219 02:36:11.789901 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:36:11.790015 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I1219 02:36:11.789594 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1219 02:36:11.790417 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1219 02:36:11.790439 1 server.go:265] "[graceful-termination] secure server is exiting"
E1219 02:36:11.790643 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862726 5156 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6\") pod \"2342ec87-98cc-47ed-9a07-a529f4e36993\" (UID: \"2342ec87-98cc-47ed-9a07-a529f4e36993\") "
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862783 5156 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume\") pod \"2342ec87-98cc-47ed-9a07-a529f4e36993\" (UID: \"2342ec87-98cc-47ed-9a07-a529f4e36993\") "
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862863 5156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume" (OuterVolumeSpecName: "test-volume") pod "2342ec87-98cc-47ed-9a07-a529f4e36993" (UID: "2342ec87-98cc-47ed-9a07-a529f4e36993"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.865285 5156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6" (OuterVolumeSpecName: "kube-api-access-vf2g6") pod "2342ec87-98cc-47ed-9a07-a529f4e36993" (UID: "2342ec87-98cc-47ed-9a07-a529f4e36993"). InnerVolumeSpecName "kube-api-access-vf2g6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.963160 5156 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume\") on node \"functional-991175\" DevicePath \"\""
Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.963191 5156 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6\") on node \"functional-991175\" DevicePath \"\""
Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.479842 5156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22952349e54dd6c4f141857a2c1d2c754e5cda38e99fe2fb137dd96a9d3da9d"
Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.852866 5156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.852846891 podStartE2EDuration="2.852846891s" podCreationTimestamp="2025-12-19 02:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:37:14.56311055 +0000 UTC m=+61.690211950" watchObservedRunningTime="2025-12-19 02:37:14.852846891 +0000 UTC m=+61.979948290"
Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.975894 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1a46824-5cec-4320-8f94-c8c554b0272c-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv\" (UID: \"c1a46824-5cec-4320-8f94-c8c554b0272c\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv"
Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.975943 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndhqb\" (UniqueName: \"kubernetes.io/projected/c1a46824-5cec-4320-8f94-c8c554b0272c-kube-api-access-ndhqb\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv\" (UID: \"c1a46824-5cec-4320-8f94-c8c554b0272c\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv"
Dec 19 02:37:15 functional-991175 kubelet[5156]: E1219 02:37:15.005527 5156 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kong-dbless-config\" is forbidden: User \"system:node:functional-991175\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-991175' and this object" logger="UnhandledError" reflector="object-\"kubernetes-dashboard\"/\"kong-dbless-config\"" type="*v1.ConfigMap"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.081810 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba8205ec-962f-4440-ab21-0ede40482a03-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.081856 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rt8\" (UniqueName: \"kubernetes.io/projected/f65681eb-1bb2-4962-8d33-6bea103f525d-kube-api-access-28rt8\") pod \"kubernetes-dashboard-web-5c9f966b98-vfznd\" (UID: \"f65681eb-1bb2-4962-8d33-6bea103f525d\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-vfznd"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082049 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f65681eb-1bb2-4962-8d33-6bea103f525d-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-vfznd\" (UID: \"f65681eb-1bb2-4962-8d33-6bea103f525d\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-vfznd"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082076 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/ba8205ec-962f-4440-ab21-0ede40482a03-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082112 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba8205ec-962f-4440-ab21-0ede40482a03-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082127 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91d01929-6493-44be-bf0c-91123bab0b32-tmp-volume\") pod \"kubernetes-dashboard-auth-75547cbd96-t758q\" (UID: \"91d01929-6493-44be-bf0c-91123bab0b32\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-75547cbd96-t758q"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082147 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswrm\" (UniqueName: \"kubernetes.io/projected/91d01929-6493-44be-bf0c-91123bab0b32-kube-api-access-gswrm\") pod \"kubernetes-dashboard-auth-75547cbd96-t758q\" (UID: \"91d01929-6493-44be-bf0c-91123bab0b32\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-75547cbd96-t758q"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.185212 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mlj\" (UniqueName: \"kubernetes.io/projected/d35933f0-20c6-4a7d-a25b-9a4e2fe54c23-kube-api-access-x2mlj\") pod \"kubernetes-dashboard-api-5f84cf677c-t95d8\" (UID: \"d35933f0-20c6-4a7d-a25b-9a4e2fe54c23\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f84cf677c-t95d8"
Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.185298 5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d35933f0-20c6-4a7d-a25b-9a4e2fe54c23-tmp-volume\") pod \"kubernetes-dashboard-api-5f84cf677c-t95d8\" (UID: \"d35933f0-20c6-4a7d-a25b-9a4e2fe54c23\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f84cf677c-t95d8"
Dec 19 02:37:19 functional-991175 kubelet[5156]: I1219 02:37:19.890914 5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
Dec 19 02:37:19 functional-991175 kubelet[5156]: I1219 02:37:19.891056 5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
Dec 19 02:37:26 functional-991175 kubelet[5156]: I1219 02:37:26.590851 5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
Dec 19 02:37:26 functional-991175 kubelet[5156]: I1219 02:37:26.590933 5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
Dec 19 02:37:27 functional-991175 kubelet[5156]: I1219 02:37:27.597466 5156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv" podStartSLOduration=9.765023436 podStartE2EDuration="13.597448488s" podCreationTimestamp="2025-12-19 02:37:14 +0000 UTC" firstStartedPulling="2025-12-19 02:37:16.058269483 +0000 UTC m=+63.185370863" lastFinishedPulling="2025-12-19 02:37:19.890694518 +0000 UTC m=+67.017795915" observedRunningTime="2025-12-19 02:37:20.565630222 +0000 UTC m=+67.692731622" watchObservedRunningTime="2025-12-19 02:37:27.597448488 +0000 UTC m=+74.724549889"
==> kubernetes-dashboard [118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36] <==
I1219 02:37:26.846447 1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
I1219 02:37:26.846547 1 init.go:48] Using in-cluster config
I1219 02:37:26.847078 1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
==> kubernetes-dashboard [bcf56a36a47d3428bb03eea9a006006029d10c9879b7becb842c7bb0e1774014] <==
I1219 02:37:20.188892 1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
W1219 02:37:20.189387 1 client_config.go:667] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1219 02:37:20.190280 1 main.go:51] Kubernetes host: https://10.96.0.1:443
I1219 02:37:20.190308 1 main.go:52] Namespace(s): []
==> storage-provisioner [58db843b3841bb930e38261156b1c8725df9bf507fd7a32a3b854031be81ea26] <==
W1219 02:37:03.584728 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:05.589856 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:05.601703 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:07.607918 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:07.626447 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:09.639867 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:09.655444 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:11.695164 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:11.717284 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:13.738885 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:13.754059 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:15.764402 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:15.778279 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:17.789702 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:17.798346 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:19.805724 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:19.840142 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:21.844182 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:21.855458 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:23.858632 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:23.921373 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:25.925485 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:26.036194 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:28.041764 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:37:28.047435 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [f648417def1c9eeda96f53ec693199daa03bf12ee4c0496af5cca782a2a12d59] <==
I1219 02:36:17.567067 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1219 02:36:17.576771 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
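Aside on the kube-proxy output above: both kube-proxy instances warn that nodePortAddresses is unset and suggest `--nodeport-addresses primary`. As a rough sketch only (not something this run did), that suggestion could be applied to a kubeadm-managed cluster like this one as follows; the "kube-proxy" ConfigMap name in kube-system and the daemonset restart are assumptions based on kubeadm defaults:
    # Sketch, assuming a kubeadm-style "kube-proxy" ConfigMap in kube-system.
    kubectl --context functional-991175 -n kube-system get configmap kube-proxy -o yaml
    # In config.conf, set the KubeProxyConfiguration field:
    #   nodePortAddresses:
    #   - primary
    kubectl --context functional-991175 -n kube-system edit configmap kube-proxy
    # Restart kube-proxy so the daemonset picks up the edited ConfigMap.
    kubectl --context functional-991175 -n kube-system rollout restart daemonset kube-proxy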
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-991175 -n functional-991175
helpers_test.go:270: (dbg) Run: kubectl --context functional-991175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp: exit status 1 (78.085752ms)
-- stdout --
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-991175/192.168.39.176
Start Time:       Fri, 19 Dec 2025 02:37:05 +0000
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Succeeded
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  mount-munger:
    Container ID:  containerd://f9b5cfdb430c61d58f430163b5939c57c79adc438cf17c0cb8b9375cac63ce46
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 19 Dec 2025 02:37:11 +0000
      Finished:     Fri, 19 Dec 2025 02:37:11 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vf2g6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:
  kube-api-access-vf2g6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  24s   default-scheduler  Successfully assigned default/busybox-mount to functional-991175
  Normal  Pulling    24s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Normal  Pulled     19s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.768s (4.768s including waiting). Image size: 2395207 bytes.
  Normal  Created    19s   kubelet            spec.containers{mount-munger}: Created container: mount-munger
  Normal  Started    19s   kubelet            spec.containers{mount-munger}: Started container mount-munger
-- /stdout --
** stderr **
Error from server (NotFound): pods "kubernetes-dashboard-api-5f84cf677c-t95d8" not found
Error from server (NotFound): pods "kubernetes-dashboard-auth-75547cbd96-t758q" not found
Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-5ldxp" not found
** /stderr **
helpers_test.go:288: kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (21.37s)
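To poke at this failure outside the harness, a minimal shell sketch for checking whether the dashboard ever reports a URL on this profile is shown below; the 120-second bound, the grep pattern, and the stderr path are illustrative choices, not values taken from the test:
    # Sketch: wait up to 120s for `minikube dashboard --url` to print a URL for the functional-991175 profile.
    timeout 120 out/minikube-linux-amd64 dashboard --url -p functional-991175 2>/tmp/dashboard.stderr | grep -m1 -E 'https?://' \
      || echo "no dashboard URL produced; see /tmp/dashboard.stderr"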