=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-125117 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
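    The failure above means the dashboard command's captured stdout never contained an http(s) URL before the test gave up; in this run the URL only appears inside the stderr log below (out.go:203), while the stdout section is empty. A minimal Go sketch of that kind of stdout check, for reference only (the helper name and regexp are illustrative assumptions, not the actual functional_test.go code):

        package main

        import (
            "fmt"
            "regexp"
        )

        // hasDashboardURL reports whether captured stdout from
        // `minikube dashboard --url` contains an http(s) URL; the test
        // reports "output didn't produce a URL" when it never sees one.
        // Illustrative only -- not the actual functional_test.go helper.
        func hasDashboardURL(stdout string) (string, bool) {
            url := regexp.MustCompile(`https?://[^\s]+`).FindString(stdout)
            return url, url != ""
        }

        func main() {
            // In this run stdout was empty (see the "stdout:" section below),
            // so no URL is found and the check fails as reported above.
            if _, ok := hasDashboardURL(""); !ok {
                fmt.Println("output didn't produce a URL")
            }
        }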
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-125117 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-125117 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-125117 --alsologtostderr -v=1] stderr:
I1219 05:57:01.123220 2047823 out.go:360] Setting OutFile to fd 1 ...
I1219 05:57:01.124468 2047823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 05:57:01.124487 2047823 out.go:374] Setting ErrFile to fd 2...
I1219 05:57:01.124494 2047823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 05:57:01.124799 2047823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-1998525/.minikube/bin
I1219 05:57:01.125094 2047823 mustload.go:66] Loading cluster: functional-125117
I1219 05:57:01.125526 2047823 config.go:182] Loaded profile config "functional-125117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 05:57:01.125977 2047823 cli_runner.go:164] Run: docker container inspect functional-125117 --format={{.State.Status}}
I1219 05:57:01.147675 2047823 host.go:66] Checking if "functional-125117" exists ...
I1219 05:57:01.148075 2047823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 05:57:01.306980 2047823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-19 05:57:01.289346768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1219 05:57:01.307132 2047823 api_server.go:166] Checking apiserver status ...
I1219 05:57:01.307221 2047823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 05:57:01.307290 2047823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-125117
I1219 05:57:01.388815 2047823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34699 SSHKeyPath:/home/jenkins/minikube-integration/22230-1998525/.minikube/machines/functional-125117/id_rsa Username:docker}
I1219 05:57:01.518989 2047823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/16555/cgroup
I1219 05:57:01.533269 2047823 api_server.go:182] apiserver freezer: "9:freezer:/docker/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/kubepods/burstable/podf74c08633a183da1956c6260ce388cbc/1b11907af61cdf1d977c26132016a1b5f9944bc16adc124eeabcf1127edd2cdc"
I1219 05:57:01.533359 2047823 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/kubepods/burstable/podf74c08633a183da1956c6260ce388cbc/1b11907af61cdf1d977c26132016a1b5f9944bc16adc124eeabcf1127edd2cdc/freezer.state
I1219 05:57:01.544048 2047823 api_server.go:204] freezer state: "THAWED"
I1219 05:57:01.544092 2047823 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 05:57:01.556133 2047823 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 05:57:01.556180 2047823 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 05:57:01.556370 2047823 config.go:182] Loaded profile config "functional-125117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 05:57:01.556382 2047823 addons.go:70] Setting dashboard=true in profile "functional-125117"
I1219 05:57:01.556390 2047823 addons.go:239] Setting addon dashboard=true in "functional-125117"
I1219 05:57:01.556412 2047823 host.go:66] Checking if "functional-125117" exists ...
I1219 05:57:01.556820 2047823 cli_runner.go:164] Run: docker container inspect functional-125117 --format={{.State.Status}}
I1219 05:57:01.633738 2047823 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 05:57:01.633768 2047823 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 05:57:01.633864 2047823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-125117
I1219 05:57:01.746853 2047823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34699 SSHKeyPath:/home/jenkins/minikube-integration/22230-1998525/.minikube/machines/functional-125117/id_rsa Username:docker}
I1219 05:57:01.900887 2047823 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 05:57:01.905988 2047823 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 05:57:01.909300 2047823 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 05:57:03.279655 2047823 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.370319451s)
I1219 05:57:03.279761 2047823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 05:57:07.131899 2047823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.852099438s)
I1219 05:57:07.131978 2047823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 05:57:07.896151 2047823 addons.go:500] Verifying addon dashboard=true in "functional-125117"
I1219 05:57:07.896474 2047823 cli_runner.go:164] Run: docker container inspect functional-125117 --format={{.State.Status}}
I1219 05:57:07.930293 2047823 out.go:179] * Verifying dashboard addon...
I1219 05:57:07.934228 2047823 kapi.go:59] client config for functional-125117: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.key", CAFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ffe230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 05:57:07.934749 2047823 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 05:57:07.934762 2047823 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 05:57:07.934767 2047823 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 05:57:07.934772 2047823 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 05:57:07.934780 2047823 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 05:57:07.935013 2047823 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 05:57:07.952545 2047823 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 05:57:07.952567 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:08.440409 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:08.939508 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:09.438445 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:09.939372 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:10.439513 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:10.938983 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:11.439181 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:11.939078 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:12.438637 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:12.942333 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:13.439043 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:13.938665 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:14.437922 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:14.938117 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:15.438989 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:15.948839 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:16.438928 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:16.945365 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:17.439923 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:17.939197 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:18.439418 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:18.938952 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:19.438480 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:19.939179 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:20.438694 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:20.938112 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:21.440646 2047823 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 05:57:21.938655 2047823 kapi.go:107] duration metric: took 14.00364192s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 05:57:21.942197 2047823 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-125117 addons enable metrics-server
I1219 05:57:21.945176 2047823 addons.go:202] Writing out "functional-125117" config to set dashboard=true...
W1219 05:57:21.945441 2047823 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 05:57:21.945947 2047823 kapi.go:59] client config for functional-125117: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.key", CAFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ffe230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 05:57:21.948688 2047823 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy kubernetes-dashboard bf7ddeff-9858-4aa5-85e6-f98ffb60765b 704 0 2025-12-19 05:57:06 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 05:57:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:32145,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.98.222.17,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.222.17],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 05:57:21.948938 2047823 host.go:66] Checking if "functional-125117" exists ...
I1219 05:57:21.949222 2047823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-125117
I1219 05:57:21.978952 2047823 kapi.go:59] client config for functional-125117: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/profiles/functional-125117/client.key", CAFile:"/home/jenkins/minikube-integration/22230-1998525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ffe230), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 05:57:21.986860 2047823 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 05:57:21.990722 2047823 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 05:57:21.994388 2047823 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 05:57:21.998270 2047823 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 05:57:22.182709 2047823 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 05:57:22.291030 2047823 out.go:179] * Dashboard Token:
I1219 05:57:22.293919 2047823 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6InlGUWpxMm1XaHJGQkJ1TEE5ci10cWpmMzR3R1lrR093bHFDYWQ1cTRJQ2cifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MjEwMjQyLCJpYXQiOjE3NjYxMjM4NDIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYzc2MzExMmYtOTdlMC00MGY0LTg3ZDktNWY1MTdkMzM5ODE5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMGRkODI3MDgtM2MwZC00Y2NjLTg1YzEtZTNiZDMyZGRlMTA2In19LCJuYmYiOjE3NjYxMjM4NDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.ruDSRWIQMH7IV_Y7WKR_D6vmiRjcH9r7nfTAYRpqcKjHDXWo5P84FpyrYNCdMiuPaEF17vorwJLOysRHH7LM18poxYVuAyZUBHH4VHpn1KHxlFGnyuIlybM4PBwYlj7YO2IhlKf0_NJ9F364cGMxZdd0AwnuC_u_bU3peR6lk9UHP8owmsSk113BCXcaSiq-ZGzgnK8EvtUQM7PqZwmM2r0aJT6-nQySJGBE0CEPW2TXRHdLI8zn4d05NoI4cUH9itpvcXCGgEjTdPDcXdqAYHetSlC8ODpCfhJ0xDiKbbzOptcGHMFFr7FCq-mWnqd19IqQoHWnBtl9CjbgMNvxPQ
I1219 05:57:22.297036 2047823 out.go:203] https://192.168.49.2:32145
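    The URL and token above come straight from state logged earlier in this run: the node IP 192.168.49.2 and NodePort 32145 of the kubernetes-dashboard-kong-proxy Service found at service.go:215, plus the admin-user service-account token. A minimal Go sketch of probing that endpoint with the token, assuming the kong proxy serves a self-signed certificate (the TLS-verification skip and variable names are illustrative, not minikube's code):

        package main

        import (
            "crypto/tls"
            "fmt"
            "net/http"
        )

        func main() {
            // Values taken from the log above: node IP from the container network,
            // NodePort 32145 from the kubernetes-dashboard-kong-proxy Service.
            dashboardURL := "https://192.168.49.2:32145"
            token := "<token printed under 'Dashboard Token:' above>"

            // The kong proxy is assumed to serve a self-signed certificate, so
            // this illustrative probe skips verification; don't do this outside a test rig.
            client := &http.Client{Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            }}

            req, err := http.NewRequest(http.MethodGet, dashboardURL, nil)
            if err != nil {
                panic(err)
            }
            req.Header.Set("Authorization", "Bearer "+token)

            resp, err := client.Do(req)
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()
            fmt.Println("dashboard proxy responded:", resp.Status)
        }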
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-125117
helpers_test.go:244: (dbg) docker inspect functional-125117:
-- stdout --
[
{
"Id": "2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d",
"Created": "2025-12-19T05:50:21.175219791Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2024795,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-19T05:50:21.237188659Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
"ResolvConfPath": "/var/lib/docker/containers/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/hostname",
"HostsPath": "/var/lib/docker/containers/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/hosts",
"LogPath": "/var/lib/docker/containers/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d/2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d-json.log",
"Name": "/functional-125117",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-125117:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-125117",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "2221e6b75bc77fdbcbd5081aa99df3d95513803d491effb7dd928cc5dbb9c46d",
"LowerDir": "/var/lib/docker/overlay2/fec91f26491afaab5012266e19a5baf181492c8d37d8a523a0279dcb6bb7b60e-init/diff:/var/lib/docker/overlay2/00358d85eab3b52f9d297862c5ac97673efd866f7bb8f8781bf0c1744f50abc5/diff",
"MergedDir": "/var/lib/docker/overlay2/fec91f26491afaab5012266e19a5baf181492c8d37d8a523a0279dcb6bb7b60e/merged",
"UpperDir": "/var/lib/docker/overlay2/fec91f26491afaab5012266e19a5baf181492c8d37d8a523a0279dcb6bb7b60e/diff",
"WorkDir": "/var/lib/docker/overlay2/fec91f26491afaab5012266e19a5baf181492c8d37d8a523a0279dcb6bb7b60e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "functional-125117",
"Source": "/var/lib/docker/volumes/functional-125117/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "functional-125117",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-125117",
"name.minikube.sigs.k8s.io": "functional-125117",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ac6be209147ff83ef33be0404c30a3d52883a893b906849415e17071827a7832",
"SandboxKey": "/var/run/docker/netns/ac6be209147f",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34699"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34700"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34703"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34701"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34702"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-125117": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ea:33:98:b2:6d:ff",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "6e3ca32369972a40f97c88e40ef8ebd8b3faecd38cd192c4fdab30ed0f8e624a",
"EndpointID": "d8731576aa6adeabdd47ddb63b5b54548f6275ebd89ddc275bf6ea852e3f8f98",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-125117",
"2221e6b75bc7"
]
}
}
}
}
]
-- /stdout --
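    The "22/tcp" host port 34699 under NetworkSettings.Ports above is the value the cli_runner lines earlier extract with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to build the SSH client. A minimal Go sketch of the same lookup by unmarshalling the inspect JSON (the struct and variable names are illustrative assumptions):

        package main

        import (
            "encoding/json"
            "fmt"
            "os/exec"
        )

        // portBinding mirrors one entry of the NetworkSettings.Ports map in the
        // docker inspect output above.
        type portBinding struct {
            HostIp   string
            HostPort string
        }

        func main() {
            // Same data the log reads with a -f Go template, fetched as JSON instead.
            out, err := exec.Command("docker", "container", "inspect", "functional-125117").Output()
            if err != nil {
                panic(err)
            }

            var containers []struct {
                NetworkSettings struct {
                    Ports map[string][]portBinding
                }
            }
            if err := json.Unmarshal(out, &containers); err != nil {
                panic(err)
            }

            if len(containers) > 0 {
                if b := containers[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
                    fmt.Println("ssh host port:", b[0].HostPort) // 34699 in this run
                }
            }
        }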
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-125117 -n functional-125117
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-125117 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-125117 logs -n 25: (1.605495152s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-125117 image load --daemon kicbase/echo-server:functional-125117 --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image save kicbase/echo-server:functional-125117 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image rm kicbase/echo-server:functional-125117 --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image save --daemon kicbase/echo-server:functional-125117 --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /etc/ssl/certs/2000386.pem │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /usr/share/ca-certificates/2000386.pem │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /etc/ssl/certs/20003862.pem │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /usr/share/ca-certificates/20003862.pem │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh sudo cat /etc/test/nested/copy/2000386/hosts │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls --format short --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls --format yaml --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ ssh │ functional-125117 ssh pgrep buildkitd │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ │
│ image │ functional-125117 image build -t localhost/my-image:functional-125117 testdata/build --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls --format json --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ image │ functional-125117 image ls --format table --alsologtostderr │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ update-context │ functional-125117 update-context --alsologtostderr -v=2 │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ update-context │ functional-125117 update-context --alsologtostderr -v=2 │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
│ update-context │ functional-125117 update-context --alsologtostderr -v=2 │ functional-125117 │ jenkins │ v1.37.0 │ 19 Dec 25 05:57 UTC │ 19 Dec 25 05:57 UTC │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/19 05:57:00
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1219 05:57:00.825630 2047702 out.go:360] Setting OutFile to fd 1 ...
I1219 05:57:00.825786 2047702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 05:57:00.825814 2047702 out.go:374] Setting ErrFile to fd 2...
I1219 05:57:00.825821 2047702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 05:57:00.826235 2047702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-1998525/.minikube/bin
I1219 05:57:00.826818 2047702 out.go:368] Setting JSON to false
I1219 05:57:00.828353 2047702 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":38367,"bootTime":1766085454,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1219 05:57:00.828503 2047702 start.go:143] virtualization:
I1219 05:57:00.831878 2047702 out.go:179] * [functional-125117] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1219 05:57:00.835744 2047702 out.go:179] - MINIKUBE_LOCATION=22230
I1219 05:57:00.835808 2047702 notify.go:221] Checking for updates...
I1219 05:57:00.842106 2047702 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1219 05:57:00.844921 2047702 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22230-1998525/kubeconfig
I1219 05:57:00.847807 2047702 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-1998525/.minikube
I1219 05:57:00.850648 2047702 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1219 05:57:00.853481 2047702 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1219 05:57:00.856743 2047702 config.go:182] Loaded profile config "functional-125117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 05:57:00.857481 2047702 driver.go:422] Setting default libvirt URI to qemu:///system
I1219 05:57:00.887079 2047702 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1219 05:57:00.887809 2047702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 05:57:00.981174 2047702 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-19 05:57:00.972012169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1219 05:57:00.981280 2047702 docker.go:319] overlay module found
I1219 05:57:00.984383 2047702 out.go:179] * Using the docker driver based on existing profile
I1219 05:57:00.987320 2047702 start.go:309] selected driver: docker
I1219 05:57:00.987344 2047702 start.go:928] validating driver "docker" against &{Name:functional-125117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-125117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 05:57:00.987457 2047702 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1219 05:57:00.987563 2047702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 05:57:01.066248 2047702 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-19 05:57:01.056970814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1219 05:57:01.066699 2047702 cni.go:84] Creating CNI manager for ""
I1219 05:57:01.066777 2047702 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1219 05:57:01.066826 2047702 start.go:353] cluster config:
{Name:functional-125117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-125117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 05:57:01.071671 2047702 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
d56bac3d8d52f 8dcebcf593999 Less than a second ago Running kubernetes-dashboard-auth 0 0225e8cd2103b kubernetes-dashboard-auth-55fb9bbdf8-4b2hl kubernetes-dashboard
29f9704c03ef9 d71ba84d8f0d2 1 second ago Running kubernetes-dashboard-metrics-scraper 0 c275930c27ba8 kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq kubernetes-dashboard
985ee70cda830 2c51e8aea46c6 2 seconds ago Running kubernetes-dashboard-web 0 60b553c003886 kubernetes-dashboard-web-5c9f966b98-wllqv kubernetes-dashboard
89d78fe89c73a 85ac4c11285e7 6 seconds ago Running kubernetes-dashboard-api 0 abe31bffd8fd0 kubernetes-dashboard-api-5f6dd64f4-wz2l2 kubernetes-dashboard
f570c4aeb8c9e 2bf86f243d250 6 seconds ago Running proxy 0 89848159ab3b4 kubernetes-dashboard-kong-9849c64bd-527k2 kubernetes-dashboard
fa783f0689484 2bf86f243d250 8 seconds ago Exited clear-stale-pid 0 89848159ab3b4 kubernetes-dashboard-kong-9849c64bd-527k2 kubernetes-dashboard
b1ee8306f19fd 1611cd07b61d5 28 seconds ago Exited mount-munger 0 daa6c1e249943 busybox-mount default
df159e0cc3b97 ce2d2cda2d858 35 seconds ago Running echo-server 0 34b2c94f7f2c5 hello-node-75c85bcc94-b9fdv default
823a7eff1bd3b 962dbbc0e55ec 40 seconds ago Running myfrontend 0 2a2ea65afd873 sp-pod default
6dab832abda4b ce2d2cda2d858 44 seconds ago Running echo-server 0 e5e35ac64f393 hello-node-connect-7d85dfc575-fknbj default
a9b205fafc432 962dbbc0e55ec 51 seconds ago Running nginx 0 c99225971f165 nginx-svc default
2020f80b4026c 138784d87c9c5 About a minute ago Running coredns 0 4aae0884bc674 coredns-66bc5c9577-s2j74 kube-system
1695d1b349c32 c96ee3c174987 About a minute ago Running kindnet-cni 0 97f4d0f17f59a kindnet-xm479 kube-system
be5400fb900ed 138784d87c9c5 About a minute ago Running coredns 0 bfb2d9a2f2bc6 coredns-66bc5c9577-jvgjh kube-system
616e08daf003a 4461daf6b6af8 About a minute ago Running kube-proxy 0 45c23361c46b4 kube-proxy-rglch kube-system
43e1c32c12ad7 ba04bb24b9575 About a minute ago Running storage-provisioner 0 9a73a03433d65 storage-provisioner kube-system
80f5fc9caf3e4 2f2aa21d34d2d About a minute ago Running kube-scheduler 2 812738176915b kube-scheduler-functional-125117 kube-system
b4205502d1709 7ada8ff13e54b About a minute ago Running kube-controller-manager 7 2138fcdd1893b kube-controller-manager-functional-125117 kube-system
1b11907af61cd cf65ae6c8f700 About a minute ago Running kube-apiserver 0 3552ec6bd3315 kube-apiserver-functional-125117 kube-system
e6ec1e0e37ba6 2c5f0dedd21c2 About a minute ago Running etcd 2 2bef43bf72df8 etcd-functional-125117 kube-system
==> containerd <==
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.167396578Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.169817863Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2: active requests=0, bytes read=11728895"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.171669558Z" level=info msg="ImageCreate event name:\"sha256:d71ba84d8f0d22f4859613e1a4cc4636910142305ead3b53d2acaec6b69833da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.194257564Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.195886217Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\" with image id \"sha256:d71ba84d8f0d22f4859613e1a4cc4636910142305ead3b53d2acaec6b69833da\", repo tag \"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\", repo digest \"docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775\", size \"11717950\" in 1.07081575s"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.196049780Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2\" returns image reference \"sha256:d71ba84d8f0d22f4859613e1a4cc4636910142305ead3b53d2acaec6b69833da\""
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.201130802Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-auth:1.4.0\""
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.210945248Z" level=info msg="CreateContainer within sandbox \"c275930c27ba8d726b0adf87ed3f69c62de8f8ed2f08671e6337766cd7f05237\" for container name:\"kubernetes-dashboard-metrics-scraper\""
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.233610268Z" level=info msg="Container 29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8: CDI devices from CRI Config.CDIDevices: []"
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.248938438Z" level=info msg="CreateContainer within sandbox \"c275930c27ba8d726b0adf87ed3f69c62de8f8ed2f08671e6337766cd7f05237\" for name:\"kubernetes-dashboard-metrics-scraper\" returns container id \"29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8\""
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.257839193Z" level=info msg="StartContainer for \"29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8\""
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.259296932Z" level=info msg="connecting to shim 29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8" address="unix:///run/containerd/s/0a628d2c35dcad9572d80623ad279f478ea81c9dd11caf27b49cdb71900e28b6" protocol=ttrpc version=3
Dec 19 05:57:22 functional-125117 containerd[3578]: time="2025-12-19T05:57:22.371222425Z" level=info msg="StartContainer for \"29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8\" returns successfully"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.394601061Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-auth:1.4.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.396950640Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-auth:1.4.0: active requests=0, bytes read=13100028"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.400038862Z" level=info msg="ImageCreate event name:\"sha256:8dcebcf59399969d7300450ebd4b47f1b8f1ba30453e65ee77c6fb59fb27550c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.410832166Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.412979791Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-auth:1.4.0\" with image id \"sha256:8dcebcf59399969d7300450ebd4b47f1b8f1ba30453e65ee77c6fb59fb27550c\", repo tag \"docker.io/kubernetesui/dashboard-auth:1.4.0\", repo digest \"docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff\", size \"13089138\" in 1.211778473s"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.413188827Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-auth:1.4.0\" returns image reference \"sha256:8dcebcf59399969d7300450ebd4b47f1b8f1ba30453e65ee77c6fb59fb27550c\""
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.425320886Z" level=info msg="CreateContainer within sandbox \"0225e8cd2103bde4efe107c7649c2155e9339812476b4c1d8cd89c1a9deebecb\" for container name:\"kubernetes-dashboard-auth\""
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.444087156Z" level=info msg="Container d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9: CDI devices from CRI Config.CDIDevices: []"
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.460948282Z" level=info msg="CreateContainer within sandbox \"0225e8cd2103bde4efe107c7649c2155e9339812476b4c1d8cd89c1a9deebecb\" for name:\"kubernetes-dashboard-auth\" returns container id \"d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9\""
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.464948924Z" level=info msg="StartContainer for \"d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9\""
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.468856214Z" level=info msg="connecting to shim d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9" address="unix:///run/containerd/s/fc057acea2e6c86341088ee862dc630bf98085dfd46b41b5093ed8336bfefa4b" protocol=ttrpc version=3
Dec 19 05:57:23 functional-125117 containerd[3578]: time="2025-12-19T05:57:23.562789533Z" level=info msg="StartContainer for \"d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9\" returns successfully"
==> coredns [2020f80b4026ca383eb5916648bf5f061d3ca523aff9553104589e45655d43d2] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
==> coredns [be5400fb900edbe189897a7de08048ea310120f2147e220b70ef98b37ddd44bc] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.12.1
linux/arm64, go1.24.1, 707c7c1
==> describe nodes <==
Name: functional-125117
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
        beta.kubernetes.io/os=linux
        kubernetes.io/arch=arm64
        kubernetes.io/hostname=functional-125117
        kubernetes.io/os=linux
        minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
        minikube.k8s.io/name=functional-125117
        minikube.k8s.io/primary=true
        minikube.k8s.io/updated_at=2025_12_19T05_56_07_0700
        minikube.k8s.io/version=v1.37.0
        node-role.kubernetes.io/control-plane=
        node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
             volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 Dec 2025 05:56:03 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-125117
AcquireTime: <unset>
RenewTime: Fri, 19 Dec 2025 05:57:17 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Fri, 19 Dec 2025 05:57:07 +0000   Fri, 19 Dec 2025 05:55:58 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Fri, 19 Dec 2025 05:57:07 +0000   Fri, 19 Dec 2025 05:55:58 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Fri, 19 Dec 2025 05:57:07 +0000   Fri, 19 Dec 2025 05:55:58 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Fri, 19 Dec 2025 05:57:07 +0000   Fri, 19 Dec 2025 05:56:03 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-125117
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 02ff784b806e34735a6e229a69428228
System UUID: 3f17a1b0-8787-4e17-96fe-db8f8d00c153
Boot ID: 03591113-7af0-4522-8acc-d2a56f93f0cf
Kernel Version: 5.15.0-1084-aws
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://2.2.0
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (18 in total)
  Namespace             Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------             ----                                                    ------------  ----------  ---------------  -------------  ---
  default               hello-node-75c85bcc94-b9fdv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
  default               hello-node-connect-7d85dfc575-fknbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
  default               nginx-svc                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
  default               sp-pod                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
  kube-system           coredns-66bc5c9577-jvgjh                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     72s
  kube-system           coredns-66bc5c9577-s2j74                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     72s
  kube-system           etcd-functional-125117                                  100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
  kube-system           kindnet-xm479                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      72s
  kube-system           kube-apiserver-functional-125117                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
  kube-system           kube-controller-manager-functional-125117               200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
  kube-system           kube-proxy-rglch                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
  kube-system           kube-scheduler-functional-125117                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
  kube-system           storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
  kubernetes-dashboard  kubernetes-dashboard-api-5f6dd64f4-wz2l2                100m (5%)     250m (12%)  200Mi (2%)       400Mi (5%)     16s
  kubernetes-dashboard  kubernetes-dashboard-auth-55fb9bbdf8-4b2hl              100m (5%)     250m (12%)  200Mi (2%)       400Mi (5%)     16s
  kubernetes-dashboard  kubernetes-dashboard-kong-9849c64bd-527k2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
  kubernetes-dashboard  kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq   100m (5%)     250m (12%)  200Mi (2%)       400Mi (5%)     16s
  kubernetes-dashboard  kubernetes-dashboard-web-5c9f966b98-wllqv               100m (5%)     250m (12%)  200Mi (2%)       400Mi (5%)     16s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1350m (67%)   1100m (55%)
  memory             1090Mi (13%)  1990Mi (25%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
  hugepages-32Mi     0 (0%)        0 (0%)
  hugepages-64Ki     0 (0%)        0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 69s                kube-proxy
  Normal   NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
  Normal   NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node functional-125117 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node functional-125117 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     86s (x7 over 86s)  kubelet          Node functional-125117 status is now: NodeHasSufficientPID
  Normal   Starting                 86s                kubelet          Starting kubelet.
  Normal   Starting                 77s                kubelet          Starting kubelet.
  Warning  CgroupV1                 77s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
  Normal   NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  77s                kubelet          Node functional-125117 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    77s                kubelet          Node functional-125117 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     77s                kubelet          Node functional-125117 status is now: NodeHasSufficientPID
  Normal   RegisteredNode           73s                node-controller  Node functional-125117 event: Registered Node functional-125117 in Controller
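A quick cross-check of the Allocated resources table above against the per-pod list: CPU requests are 100m x 2 (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m x 4 (dashboard api, auth, metrics-scraper, web) = 1350m, i.e. 67.5% of the node's 2 CPUs, reported as 1350m (67%). CPU limits are 100m (kindnet) + 250m x 4 (dashboard components) = 1100m (55%), and memory requests are 70Mi x 2 + 100Mi + 50Mi + 200Mi x 4 = 1090Mi, so the summary rows are consistent with the pod table.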
==> dmesg <==
[Dec19 04:47] overlayfs: idmapped layers are currently not supported
[Dec19 04:48] overlayfs: idmapped layers are currently not supported
[Dec19 04:49] overlayfs: idmapped layers are currently not supported
[Dec19 04:51] overlayfs: idmapped layers are currently not supported
[Dec19 04:53] overlayfs: idmapped layers are currently not supported
[Dec19 05:03] overlayfs: idmapped layers are currently not supported
[Dec19 05:04] overlayfs: idmapped layers are currently not supported
[Dec19 05:05] overlayfs: idmapped layers are currently not supported
[Dec19 05:06] overlayfs: idmapped layers are currently not supported
[ +12.793339] overlayfs: idmapped layers are currently not supported
[Dec19 05:07] overlayfs: idmapped layers are currently not supported
[Dec19 05:08] overlayfs: idmapped layers are currently not supported
[Dec19 05:09] overlayfs: idmapped layers are currently not supported
[Dec19 05:10] overlayfs: idmapped layers are currently not supported
[Dec19 05:11] overlayfs: idmapped layers are currently not supported
[Dec19 05:13] overlayfs: idmapped layers are currently not supported
[Dec19 05:14] overlayfs: idmapped layers are currently not supported
[Dec19 05:32] overlayfs: idmapped layers are currently not supported
[Dec19 05:33] overlayfs: idmapped layers are currently not supported
[Dec19 05:35] overlayfs: idmapped layers are currently not supported
[Dec19 05:36] overlayfs: idmapped layers are currently not supported
[Dec19 05:38] overlayfs: idmapped layers are currently not supported
[Dec19 05:39] overlayfs: idmapped layers are currently not supported
[Dec19 05:40] overlayfs: idmapped layers are currently not supported
[Dec19 05:42] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [e6ec1e0e37ba648400fccb6fa32f0cbf03a7dcb4974ea051aa2b9851579e4135] <==
{"level":"warn","ts":"2025-12-19T05:56:02.566494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52630","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.597144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52638","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.604939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52666","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.635242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52678","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.655305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52710","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.665980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52718","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.689901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.701548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52746","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.727470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52764","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.752998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.769498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52802","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.792723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52830","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:56:02.869320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.570627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33742","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.595193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33752","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.640955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33776","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.671108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.703724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.739955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33832","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.814597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33872","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.818625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.829746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33878","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.857735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.876226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33918","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T05:57:10.913117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
==> kernel <==
05:57:24 up 10:39, 0 user, load average: 1.90, 1.43, 1.58
Linux functional-125117 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [1695d1b349c32a60a186f9243745c83c35a6445dff4274de4e55673fea205566] <==
I1219 05:56:14.315180 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1219 05:56:14.315587 1 main.go:148] setting mtu 1500 for CNI
I1219 05:56:14.315726 1 main.go:178] kindnetd IP family: "ipv4"
I1219 05:56:14.315771 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-12-19T05:56:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1219 05:56:14.525748 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1219 05:56:14.525859 1 controller.go:381] "Waiting for informer caches to sync"
I1219 05:56:14.525890 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1219 05:56:14.526226 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1219 05:56:14.728860 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1219 05:56:14.728887 1 metrics.go:72] Registering metrics
I1219 05:56:14.728941 1 controller.go:711] "Syncing nftables rules"
I1219 05:56:24.523205 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:56:24.523277 1 main.go:301] handling current node
I1219 05:56:34.523857 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:56:34.523917 1 main.go:301] handling current node
I1219 05:56:44.523829 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:56:44.523892 1 main.go:301] handling current node
I1219 05:56:54.523942 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:56:54.524215 1 main.go:301] handling current node
I1219 05:57:04.523921 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:57:04.524155 1 main.go:301] handling current node
I1219 05:57:14.523165 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 05:57:14.523195 1 main.go:301] handling current node
==> kube-apiserver [1b11907af61cdf1d977c26132016a1b5f9944bc16adc124eeabcf1127edd2cdc] <==
I1219 05:57:04.041152 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 05:57:04.072402 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 05:57:04.089798 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 05:57:04.106391 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 05:57:04.120612 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 05:57:04.141074 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 05:57:04.159888 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
I1219 05:57:04.176191 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
I1219 05:57:06.946022 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.99.159.156"}
I1219 05:57:06.959518 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.227.80"}
I1219 05:57:06.981086 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.98.222.17"}
I1219 05:57:06.981413 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.127.44"}
I1219 05:57:06.989979 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.96.24.112"}
W1219 05:57:10.560305 1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.593302 1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.638021 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.670781 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.696237 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.739571 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.781937 1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 05:57:10.807731 1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.829149 1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.854340 1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 05:57:10.876418 1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 05:57:10.907481 1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
==> kube-controller-manager [b4205502d1709aae1862386bc1dc7daa23c91a9a8f3792a70c5175df051a34cb] <==
I1219 05:56:10.590286 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1219 05:56:10.590323 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1219 05:56:10.590358 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1219 05:56:10.590372 1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
I1219 05:56:10.590411 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1219 05:56:10.590545 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1219 05:56:10.590589 1 shared_informer.go:356] "Caches are synced" controller="job"
I1219 05:56:10.590665 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1219 05:56:10.590831 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1219 05:56:10.591165 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-125117" podCIDRs=["10.244.0.0/24"]
I1219 05:56:10.599902 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1219 05:56:10.609827 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1219 05:57:10.550562 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
I1219 05:57:10.550608 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
I1219 05:57:10.550631 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
I1219 05:57:10.550652 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
I1219 05:57:10.550673 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
I1219 05:57:10.550689 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
I1219 05:57:10.550710 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
I1219 05:57:10.550777 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
I1219 05:57:10.550798 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
I1219 05:57:10.550877 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1219 05:57:10.628892 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1219 05:57:11.951915 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1219 05:57:12.029591 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-proxy [616e08daf003aa2bdaeae29a78794eb5d850dab46453b3f06af3b916bb8aca9f] <==
I1219 05:56:14.079436 1 server_linux.go:53] "Using iptables proxy"
I1219 05:56:14.190092 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1219 05:56:14.290220 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1219 05:56:14.290298 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1219 05:56:14.290540 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1219 05:56:14.310973 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1219 05:56:14.311038 1 server_linux.go:132] "Using iptables Proxier"
I1219 05:56:14.319490 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1219 05:56:14.319922 1 server.go:527] "Version info" version="v1.34.3"
I1219 05:56:14.320425 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 05:56:14.322923 1 config.go:200] "Starting service config controller"
I1219 05:56:14.323005 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1219 05:56:14.323115 1 config.go:106] "Starting endpoint slice config controller"
I1219 05:56:14.323160 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1219 05:56:14.323243 1 config.go:403] "Starting serviceCIDR config controller"
I1219 05:56:14.323280 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1219 05:56:14.324235 1 config.go:309] "Starting node config controller"
I1219 05:56:14.324308 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1219 05:56:14.324405 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1219 05:56:14.424997 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1219 05:56:14.425146 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1219 05:56:14.425314 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-scheduler [80f5fc9caf3e4756bcd590fcb7b88ee219fc39b293345a00de18c3859326a6eb] <==
E1219 05:56:03.641415 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1219 05:56:03.641661 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1219 05:56:03.641846 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 05:56:03.642007 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1219 05:56:03.642159 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1219 05:56:03.642334 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1219 05:56:03.642489 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1219 05:56:03.642670 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1219 05:56:04.459604 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1219 05:56:04.479265 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1219 05:56:04.503174 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1219 05:56:04.529209 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1219 05:56:04.545665 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1219 05:56:04.598777 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1219 05:56:04.608720 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1219 05:56:04.618361 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1219 05:56:04.665006 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 05:56:04.738829 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1219 05:56:04.772569 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
E1219 05:56:04.806122 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1219 05:56:04.817648 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1219 05:56:04.832812 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1219 05:56:04.856331 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1219 05:56:04.865034 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
I1219 05:56:06.702294 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 19 05:57:06 functional-125117 kubelet[16669]: E1219 05:57:06.432669 16669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76c58cb980cd99548d8e38bf12a2d083cbce3ddb4d6369b981fa6103aae67cae\": not found" containerID="76c58cb980cd99548d8e38bf12a2d083cbce3ddb4d6369b981fa6103aae67cae"
Dec 19 05:57:06 functional-125117 kubelet[16669]: I1219 05:57:06.432706 16669 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="76c58cb980cd99548d8e38bf12a2d083cbce3ddb4d6369b981fa6103aae67cae" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76c58cb980cd99548d8e38bf12a2d083cbce3ddb4d6369b981fa6103aae67cae\": not found"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.373175 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jpzk\" (UniqueName: \"kubernetes.io/projected/c81fd795-c18d-48e3-8218-8a686d492c5d-kube-api-access-2jpzk\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq\" (UID: \"c81fd795-c18d-48e3-8218-8a686d492c5d\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.373238 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c81fd795-c18d-48e3-8218-8a686d492c5d-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq\" (UID: \"c81fd795-c18d-48e3-8218-8a686d492c5d\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.486025 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/bd3b7e2e-bcbb-4978-902e-5558b3b23bfc-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-527k2\" (UID: \"bd3b7e2e-bcbb-4978-902e-5558b3b23bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-527k2"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.497512 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5hpw\" (UniqueName: \"kubernetes.io/projected/40774d56-9b04-4276-ba8e-be42826d0b69-kube-api-access-g5hpw\") pod \"kubernetes-dashboard-api-5f6dd64f4-wz2l2\" (UID: \"40774d56-9b04-4276-ba8e-be42826d0b69\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f6dd64f4-wz2l2"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.504678 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d59727e7-0af4-4692-98bb-ec7f65c9f87a-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-wllqv\" (UID: \"d59727e7-0af4-4692-98bb-ec7f65c9f87a\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-wllqv"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.504961 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/40774d56-9b04-4276-ba8e-be42826d0b69-tmp-volume\") pod \"kubernetes-dashboard-api-5f6dd64f4-wz2l2\" (UID: \"40774d56-9b04-4276-ba8e-be42826d0b69\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f6dd64f4-wz2l2"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.505099 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/bd3b7e2e-bcbb-4978-902e-5558b3b23bfc-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-527k2\" (UID: \"bd3b7e2e-bcbb-4978-902e-5558b3b23bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-527k2"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.505233 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/bd3b7e2e-bcbb-4978-902e-5558b3b23bfc-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-527k2\" (UID: \"bd3b7e2e-bcbb-4978-902e-5558b3b23bfc\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-527k2"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.505374 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qrg5\" (UniqueName: \"kubernetes.io/projected/d59727e7-0af4-4692-98bb-ec7f65c9f87a-kube-api-access-5qrg5\") pod \"kubernetes-dashboard-web-5c9f966b98-wllqv\" (UID: \"d59727e7-0af4-4692-98bb-ec7f65c9f87a\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-wllqv"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.606591 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/64090409-d0e5-4576-b8f0-e77c4b56c9f4-tmp-volume\") pod \"kubernetes-dashboard-auth-55fb9bbdf8-4b2hl\" (UID: \"64090409-d0e5-4576-b8f0-e77c4b56c9f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-55fb9bbdf8-4b2hl"
Dec 19 05:57:07 functional-125117 kubelet[16669]: I1219 05:57:07.606707 16669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdc7\" (UniqueName: \"kubernetes.io/projected/64090409-d0e5-4576-b8f0-e77c4b56c9f4-kube-api-access-6vdc7\") pod \"kubernetes-dashboard-auth-55fb9bbdf8-4b2hl\" (UID: \"64090409-d0e5-4576-b8f0-e77c4b56c9f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-55fb9bbdf8-4b2hl"
Dec 19 05:57:16 functional-125117 kubelet[16669]: I1219 05:57:16.821597 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:16 functional-125117 kubelet[16669]: I1219 05:57:16.821698 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:17 functional-125117 kubelet[16669]: I1219 05:57:17.749742 16669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-527k2" podStartSLOduration=3.693364792 podStartE2EDuration="10.749720965s" podCreationTimestamp="2025-12-19 05:57:07 +0000 UTC" firstStartedPulling="2025-12-19 05:57:08.072011279 +0000 UTC m=+61.933041592" lastFinishedPulling="2025-12-19 05:57:15.128367452 +0000 UTC m=+68.989397765" observedRunningTime="2025-12-19 05:57:17.718313133 +0000 UTC m=+71.579343462" watchObservedRunningTime="2025-12-19 05:57:17.749720965 +0000 UTC m=+71.610751278"
Dec 19 05:57:21 functional-125117 kubelet[16669]: I1219 05:57:21.122150 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:21 functional-125117 kubelet[16669]: I1219 05:57:21.122225 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:21 functional-125117 kubelet[16669]: I1219 05:57:21.761389 16669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-5f6dd64f4-wz2l2" podStartSLOduration=6.060692172 podStartE2EDuration="14.761369975s" podCreationTimestamp="2025-12-19 05:57:07 +0000 UTC" firstStartedPulling="2025-12-19 05:57:08.120715714 +0000 UTC m=+61.981746026" lastFinishedPulling="2025-12-19 05:57:16.821393516 +0000 UTC m=+70.682423829" observedRunningTime="2025-12-19 05:57:17.750176626 +0000 UTC m=+71.611206955" watchObservedRunningTime="2025-12-19 05:57:21.761369975 +0000 UTC m=+75.622400288"
Dec 19 05:57:22 functional-125117 kubelet[16669]: I1219 05:57:22.197856 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:22 functional-125117 kubelet[16669]: I1219 05:57:22.197955 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:22 functional-125117 kubelet[16669]: I1219 05:57:22.769765 16669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-nrkjq" podStartSLOduration=1.866909063 podStartE2EDuration="15.769724862s" podCreationTimestamp="2025-12-19 05:57:07 +0000 UTC" firstStartedPulling="2025-12-19 05:57:08.294783906 +0000 UTC m=+62.155814218" lastFinishedPulling="2025-12-19 05:57:22.197599696 +0000 UTC m=+76.058630017" observedRunningTime="2025-12-19 05:57:22.769431074 +0000 UTC m=+76.630461420" watchObservedRunningTime="2025-12-19 05:57:22.769724862 +0000 UTC m=+76.630755232"
Dec 19 05:57:22 functional-125117 kubelet[16669]: I1219 05:57:22.770476 16669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-wllqv" podStartSLOduration=2.797449284 podStartE2EDuration="15.770463241s" podCreationTimestamp="2025-12-19 05:57:07 +0000 UTC" firstStartedPulling="2025-12-19 05:57:08.148574323 +0000 UTC m=+62.009604636" lastFinishedPulling="2025-12-19 05:57:21.12158828 +0000 UTC m=+74.982618593" observedRunningTime="2025-12-19 05:57:21.761037902 +0000 UTC m=+75.622068223" watchObservedRunningTime="2025-12-19 05:57:22.770463241 +0000 UTC m=+76.631493562"
Dec 19 05:57:23 functional-125117 kubelet[16669]: I1219 05:57:23.417034 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
Dec 19 05:57:23 functional-125117 kubelet[16669]: I1219 05:57:23.417620 16669 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"203034800Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","hugepages-32Mi":"0","hugepages-64Ki":"0","memory":"8022300Ki","pods":"110"}
==> kubernetes-dashboard [29f9704c03ef922b8caa0b1f4f2a3bfcd7eb0990250215e8c3580c91d26110d8] <==
I1219 05:57:22.360726 1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
W1219 05:57:22.360841 1 client_config.go:667] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1219 05:57:22.360954 1 main.go:51] Kubernetes host: https://10.96.0.1:443
I1219 05:57:22.360960 1 main.go:52] Namespace(s): []
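The "Neither --kubeconfig nor --master was specified. Using the inClusterConfig" warning above is client-go's normal in-pod fallback: the API server address comes from the kubernetes service (hence "Kubernetes host: https://10.96.0.1:443") and the credentials from the mounted service-account token. A minimal client-go sketch of that path, purely illustrative and not the scraper's actual code:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Only works inside a pod: reads the service-account token and the
	// KUBERNETES_SERVICE_HOST/PORT environment variables.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("API reachable, nodes:", len(nodes.Items))
}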
==> kubernetes-dashboard [89d78fe89c73a68b9128e7228e35a5b9296f313994f16ba47daf809095b14825] <==
I1219 05:57:17.031602 1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
I1219 05:57:17.031786 1 init.go:49] Using in-cluster config
I1219 05:57:17.032037 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 05:57:17.032049 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 05:57:17.032054 1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 05:57:17.032058 1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 05:57:17.045001 1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
I1219 05:57:17.045036 1 client.go:265] Creating in-cluster Sidecar client
I1219 05:57:17.116408 1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
E1219 05:57:17.119466 1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
==> kubernetes-dashboard [985ee70cda830123987b6d97ba524b9cf90dab1dd81fff6785c740071638f8a4] <==
I1219 05:57:21.342523 1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
I1219 05:57:21.342593 1 init.go:48] Using in-cluster config
I1219 05:57:21.342964 1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
==> kubernetes-dashboard [d56bac3d8d52fba6a6ff205aef02077569096618ca3b77f900e4c2ac7d44d3b9] <==
I1219 05:57:23.615872 1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
I1219 05:57:23.615974 1 init.go:49] Using in-cluster config
I1219 05:57:23.616097 1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
==> storage-provisioner [43e1c32c12ad7b3f9b354b1eb10a2323efadcd73ab44c88d1b5ee33160e6dd2e] <==
W1219 05:56:59.670173 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:01.673418 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:01.678181 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:03.681262 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:03.686122 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:05.689882 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:05.698701 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:07.722145 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:07.738520 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:09.741659 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:09.746965 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:11.752062 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:11.758584 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:13.762547 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:13.774586 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:15.782555 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:15.789346 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:17.793431 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:17.805404 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:19.808477 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:19.821654 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:21.824490 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:21.829289 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:23.832493 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 05:57:23.837613 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
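The storage-provisioner warnings in the block above repeat every couple of seconds because it still reads and writes v1 Endpoints (most likely for its leader-election lock), which the API server flags as deprecated since v1.33 in favour of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the suggested replacement read path, assuming a local kubeconfig and offered purely for illustration (not the provisioner's code):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// List discovery.k8s.io/v1 EndpointSlices instead of deprecated v1 Endpoints.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}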
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-125117 -n functional-125117
helpers_test.go:270: (dbg) Run: kubectl --context functional-125117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context functional-125117 describe pod busybox-mount
helpers_test.go:291: (dbg) kubectl --context functional-125117 describe pod busybox-mount:
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-125117/192.168.49.2
Start Time: Fri, 19 Dec 2025 05:56:52 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.11
IPs:
IP: 10.244.0.11
Containers:
mount-munger:
Container ID: containerd://b1ee8306f19fd7b881ca18f2bd82b6bf7345fee7183bdceaac9a98559b6767f9
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 19 Dec 2025 05:56:55 +0000
Finished: Fri, 19 Dec 2025 05:56:55 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rn4dz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-rn4dz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/busybox-mount to functional-125117
  Normal  Pulling    33s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Normal  Pulled     31s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.215s (2.215s including waiting). Image size: 1935750 bytes.
  Normal  Created    31s   kubelet            Created container: mount-munger
  Normal  Started    30s   kubelet            Started container mount-munger
-- /stdout --
helpers_test.go:294: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (23.96s)
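For context on the post-mortem: busybox-mount shows Status: Succeeded, so it matches the status.phase!=Running field selector that helpers_test.go passes to kubectl when listing "non-running" pods, even though its container exited cleanly with code 0. A minimal client-go sketch of the same query, assuming a kubeconfig that points at the functional-125117 context (illustrative only):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same filter the post-mortem helper uses: any pod whose phase is not
	// Running, which includes Succeeded pods such as busybox-mount.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}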