=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] ...
E1219 02:33:53.777270 257493 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-253859/.minikube/profiles/addons-367973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180941 --alsologtostderr -v=1] stderr:
I1219 02:33:41.209203 299213 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:41.209660 299213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.209669 299213 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:41.209675 299213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.209978 299213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:33:41.210381 299213 mustload.go:66] Loading cluster: functional-180941
I1219 02:33:41.210958 299213 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:33:41.211489 299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:41.236321 299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:41.236730 299213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:33:41.312560 299213 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.300701489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:33:41.312751 299213 api_server.go:166] Checking apiserver status ...
I1219 02:33:41.312828 299213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:33:41.312904 299213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:33:41.334011 299213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:33:41.449989 299213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4944/cgroup
W1219 02:33:41.459958 299213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4944/cgroup: Process exited with status 1
stdout:
stderr:
I1219 02:33:41.460022 299213 ssh_runner.go:195] Run: ls
I1219 02:33:41.464417 299213 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 02:33:41.470033 299213 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 02:33:41.470084 299213 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:33:41.470276 299213 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:33:41.470292 299213 addons.go:70] Setting dashboard=true in profile "functional-180941"
I1219 02:33:41.470300 299213 addons.go:239] Setting addon dashboard=true in "functional-180941"
I1219 02:33:41.470326 299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:41.470803 299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:41.491873 299213 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:33:41.491898 299213 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:33:41.491991 299213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-180941
I1219 02:33:41.511817 299213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-253859/.minikube/machines/functional-180941/id_rsa Username:docker}
I1219 02:33:41.627055 299213 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:33:41.630655 299213 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:33:41.634645 299213 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:33:43.592313 299213 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.957630594s)
I1219 02:33:43.592411 299213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:33:46.742006 299213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.149546384s)
I1219 02:33:46.742160 299213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:33:46.934400 299213 addons.go:500] Verifying addon dashboard=true in "functional-180941"
I1219 02:33:46.934794 299213 cli_runner.go:164] Run: docker container inspect functional-180941 --format={{.State.Status}}
I1219 02:33:46.955611 299213 out.go:179] * Verifying dashboard addon...
I1219 02:33:46.957234 299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:46.957773 299213 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:33:46.957789 299213 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:33:46.957797 299213 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:33:46.957804 299213 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:33:46.957811 299213 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:33:46.958159 299213 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:33:46.966730 299213 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:33:46.966748 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:47.462060 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:47.961921 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:48.461719 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:48.962796 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:49.461063 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:49.965063 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:50.462048 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:50.962495 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:51.462226 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:51.962044 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:52.461270 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:52.961567 299213 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:53.462265 299213 kapi.go:107] duration metric: took 6.504090925s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:33:53.464082 299213 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-180941 addons enable metrics-server
I1219 02:33:53.465599 299213 addons.go:202] Writing out "functional-180941" config to set dashboard=true...
W1219 02:33:53.465908 299213 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:33:53.466454 299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:53.469377 299213 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy kubernetes-dashboard 921f0e42-b2f8-4174-bb8c-c5692c32573d 706 0 2025-12-19 02:33:46 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:33:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:31402,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.110.21.49,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.21.49],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:33:53.469557 299213 host.go:66] Checking if "functional-180941" exists ...
I1219 02:33:53.469813 299213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-180941
I1219 02:33:53.491956 299213 kapi.go:59] client config for functional-180941: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/profiles/functional-180941/client.key", CAFile:"/home/jenkins/minikube-integration/22230-253859/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:53.500062 299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.504332 299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.507971 299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.511865 299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.695759 299213 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:53.764576 299213 out.go:179] * Dashboard Token:
I1219 02:33:53.765743 299213 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6IkdIQ2RYQnNHYlpmOXZ6dnllWm9RQThuZGJkUm1WS3ZpUHI4N2lIZlVGWUkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MDMzLCJpYXQiOjE3NjYxMTE2MzMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNGM4OTU1ZTMtMTRlMy00YjYyLWIxN2ItNThjNzdhMzQ5MzQ4Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYWRhNmUwODEtMjI1Ny00OGY3LWFkOWQtMzljMmY2ZDI4MWY0In19LCJuYmYiOjE3NjYxMTE2MzMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.wfL9uA2AWerKmwno28vMTFfBuA6xfVbRzwZCcNdJyQiFcGEEPqa9lX3L3s1wZtE4q_b-pYqYhGLlGCRMcMPRUbLKgcihWLF_UOQjLxKgtaeTEDEbW6vmIuE0c8-my2Yh2dl5TyR-lBH_ACHEbHPJr3CtMVgNFrUmyh__8IniP6e2CNIbbLm0ABPxEvaTPAezJVYY2vTImUHI7PyMzVU7ewgFwQTXKs6jyTres3RpqxpYU14pGzVGd48PHXp9Gw8LPwbZbjqsHOTUXVs_f4UX0zmevWspcwi1Kxy-vSB6XZm8c2fIQn8DQ2OIR8Nbv4Tq-oX0EYq8SLCKyoGcIX2a4w
I1219 02:33:53.766835 299213 out.go:203] https://192.168.49.2:31402
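The URL and admin token above appear only in the command's stderr dump; the stdout dump higher up is empty, which matches the test's "output didn't produce a URL" failure. As a minimal sketch (assuming the NodePort 31402 and the token printed above are still valid, and that the host can reach the cluster node at 192.168.49.2), the kong-proxy endpoint behind the dashboard addon can be probed directly; --insecure is needed because the proxy serves a self-signed certificate:

        # probe the kong-proxy NodePort reported above (values taken from this log)
        curl --insecure https://192.168.49.2:31402/
        # authenticated request using the ServiceAccount token printed by the dashboard command
        curl --insecure -H "Authorization: Bearer <token-from-the-log-above>" https://192.168.49.2:31402/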
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-180941
helpers_test.go:244: (dbg) docker inspect functional-180941:
-- stdout --
[
{
"Id": "699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6",
"Created": "2025-12-19T02:32:01.339298956Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 288607,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-19T02:32:01.370634155Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
"ResolvConfPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/hostname",
"HostsPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/hosts",
"LogPath": "/var/lib/docker/containers/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6/699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6-json.log",
"Name": "/functional-180941",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-180941:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "functional-180941",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "699821475ce793e6c625880f5227e89664f4e29f8122eb0271b1539181ecf7e6",
"LowerDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45-init/diff:/var/lib/docker/overlay2/68e8325308c9e4650215fd35d4b00e1f54e6ac5929641a1bc8ed2d512448afbd/diff",
"MergedDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/merged",
"UpperDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/diff",
"WorkDir": "/var/lib/docker/overlay2/20d7209263a8a38c252f234112195acbf382e41774c953095e95d009678a5e45/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-180941",
"Source": "/var/lib/docker/volumes/functional-180941/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-180941",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-180941",
"name.minikube.sigs.k8s.io": "functional-180941",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "83807b2cc2304c7bab0a19f825c7badb9a243f11bb649fd461be870ef5dee81b",
"SandboxKey": "/var/run/docker/netns/83807b2cc230",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32783"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32784"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32787"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32785"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32786"
}
]
},
"Networks": {
"functional-180941": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "52ec214e446330d29bad9b1e27048dbc798e4000f73ead459831b48f3a7830ec",
"EndpointID": "630c627811ac293db643c1e480a06874e14f3ebd0b6fa9fdaa6add6f5ea3b93d",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"MacAddress": "a6:1b:ac:2a:b6:58",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-180941",
"699821475ce7"
]
}
}
}
}
]
-- /stdout --
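The PortBindings/Ports sections above show each container port published on 127.0.0.1 (22/tcp -> 32783, 8441/tcp -> 32786, and so on). A minimal sketch of reading the same mapping back with a docker inspect Go template, mirroring the cli_runner calls earlier in this log (the 8441/tcp variant is an assumed extension of the 22/tcp query shown above):

        # host port backing the node's SSH endpoint (22/tcp), as used by sshutil above
        docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-180941
        # same query for the published apiserver port (8441/tcp)
        docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-180941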
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-180941 -n functional-180941
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p functional-180941 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-180941 logs -n 25: (1.748738073s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-180941 ssh sudo umount -f /mount-9p │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ image │ functional-180941 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ mount │ -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount3 --alsologtostderr -v=1 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ ssh │ functional-180941 ssh findmnt -T /mount1 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ mount │ -p functional-180941 /tmp/TestFunctionalparallelMountCmdVerifyCleanup437824581/001:/mount1 --alsologtostderr -v=1 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ service │ functional-180941 service list -o json │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ image │ functional-180941 image ls │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ image │ functional-180941 image save --daemon kicbase/echo-server:functional-180941 --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh findmnt -T /mount1 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh findmnt -T /mount2 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh findmnt -T /mount3 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ mount │ -p functional-180941 --kill=true │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ service │ functional-180941 service --namespace=default --https --url hello-node │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh echo hello │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh cat /etc/hostname │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ service │ functional-180941 service hello-node --url --format={{.IP}} │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ tunnel │ functional-180941 tunnel --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ tunnel │ functional-180941 tunnel --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ service │ functional-180941 service hello-node --url │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ tunnel │ functional-180941 tunnel --alsologtostderr │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
│ ssh │ functional-180941 ssh sudo cat /etc/ssl/certs/257493.pem │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh sudo cat /usr/share/ca-certificates/257493.pem │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh sudo cat /etc/ssl/certs/2574932.pem │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
│ ssh │ functional-180941 ssh sudo cat /usr/share/ca-certificates/2574932.pem │ functional-180941 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ │
└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/19 02:33:41
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1219 02:33:41.191346 299204 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:41.191648 299204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.191660 299204 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:41.191668 299204 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:41.191920 299204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-253859/.minikube/bin
I1219 02:33:41.192389 299204 out.go:368] Setting JSON to false
I1219 02:33:41.193782 299204 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4560,"bootTime":1766107061,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1219 02:33:41.193859 299204 start.go:143] virtualization: kvm guest
I1219 02:33:41.199236 299204 out.go:179] * [functional-180941] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1219 02:33:41.200720 299204 out.go:179] - MINIKUBE_LOCATION=22230
I1219 02:33:41.200747 299204 notify.go:221] Checking for updates...
I1219 02:33:41.203200 299204 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1219 02:33:41.204606 299204 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22230-253859/kubeconfig
I1219 02:33:41.205808 299204 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-253859/.minikube
I1219 02:33:41.206791 299204 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1219 02:33:41.207865 299204 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1219 02:33:41.209608 299204 config.go:182] Loaded profile config "functional-180941": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:33:41.210542 299204 driver.go:422] Setting default libvirt URI to qemu:///system
I1219 02:33:41.241530 299204 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
I1219 02:33:41.241678 299204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:33:41.313040 299204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.300701489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:33:41.313183 299204 docker.go:319] overlay module found
I1219 02:33:41.319388 299204 out.go:179] * Using the docker driver based on existing profile
I1219 02:33:41.320225 299204 start.go:309] selected driver: docker
I1219 02:33:41.320241 299204 start.go:928] validating driver "docker" against &{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 02:33:41.320366 299204 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1219 02:33:41.320455 299204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:33:41.381136 299204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:33:41.371469379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:33:41.381942 299204 cni.go:84] Creating CNI manager for ""
I1219 02:33:41.382036 299204 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1219 02:33:41.382110 299204 start.go:353] cluster config:
{Name:functional-180941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-180941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1219 02:33:41.383682 299204 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
5e345c165d4a4 59f642f485d26 2 seconds ago Running kubernetes-dashboard-web 0 c4a3939a31d77 kubernetes-dashboard-web-5c9f966b98-htfj4 kubernetes-dashboard
01e01e1796cdd 56cc512116c8f 10 seconds ago Exited mount-munger 0 452388232e3cd busybox-mount default
e5a930fb2ef9c 9056ab77afb8e 13 seconds ago Running echo-server 0 e41de8f72e96e hello-node-75c85bcc94-4z4zz default
b39cd9d618557 5826b25d990d7 41 seconds ago Running kube-controller-manager 2 034998a26aa20 kube-controller-manager-functional-180941 kube-system
14ac2a6e39974 aa27095f56193 41 seconds ago Running kube-apiserver 0 8909f53b69946 kube-apiserver-functional-180941 kube-system
978c221369557 a3e246e9556e9 41 seconds ago Running etcd 1 6e2f053cb8397 etcd-functional-180941 kube-system
73020ed8c20ec aec12dadf56dd 52 seconds ago Running kube-scheduler 1 b59d9640c4b37 kube-scheduler-functional-180941 kube-system
d333d965ea2cd 36eef8e07bdd6 52 seconds ago Running kube-proxy 1 cce762d72cd44 kube-proxy-j855q kube-system
e7f1ab2e451e8 5826b25d990d7 52 seconds ago Exited kube-controller-manager 1 034998a26aa20 kube-controller-manager-functional-180941 kube-system
3d1f591b08793 6e38f40d628db 53 seconds ago Running storage-provisioner 1 104cc4521319d storage-provisioner kube-system
d9d9e1018c454 52546a367cc9e 53 seconds ago Running coredns 1 1369879608ca1 coredns-66bc5c9577-wzv8l kube-system
9026560daa2d2 4921d7a6dffa9 53 seconds ago Running kindnet-cni 1 ddc66894c9955 kindnet-xh25x kube-system
009e96b5d79ed 52546a367cc9e About a minute ago Exited coredns 0 1369879608ca1 coredns-66bc5c9577-wzv8l kube-system
4d325bd49d2ae 6e38f40d628db About a minute ago Exited storage-provisioner 0 104cc4521319d storage-provisioner kube-system
8eded8e1cc2de 4921d7a6dffa9 About a minute ago Exited kindnet-cni 0 ddc66894c9955 kindnet-xh25x kube-system
296dd104376b4 36eef8e07bdd6 About a minute ago Exited kube-proxy 0 cce762d72cd44 kube-proxy-j855q kube-system
c64dcbfa4c96d aec12dadf56dd About a minute ago Exited kube-scheduler 0 b59d9640c4b37 kube-scheduler-functional-180941 kube-system
5fe6627431dbf a3e246e9556e9 About a minute ago Exited etcd 0 6e2f053cb8397 etcd-functional-180941 kube-system
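The container status table is gathered from the node's CRI runtime. A minimal sketch of listing the same containers by hand, assuming crictl is present inside the kicbase node (as it is for minikube's containerd runtime) and that minikube ssh forwards the trailing command:

        # list all CRI containers (running and exited) on the functional-180941 node
        minikube -p functional-180941 ssh -- sudo crictl ps -a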
==> containerd <==
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.023231603Z" level=info msg="connecting to shim 5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1" address="unix:///run/containerd/s/0c7484f46bb46c91f5b5b3c64c93ccf9896024852efff89965fe295f91156fbd" protocol=ttrpc version=3
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.099134821Z" level=info msg="StartContainer for \"5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1\" returns successfully"
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.988068216Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod9b003897_48f4_4bfb_862f_b38aa4ba1260.slice/cri-containerd-9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.988231829Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-pod9b003897_48f4_4bfb_862f_b38aa4ba1260.slice/cri-containerd-9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.989195035Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72bfb67_8e41_4d33_a35e_9300851b3b09.slice/cri-containerd-d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.989309746Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode72bfb67_8e41_4d33_a35e_9300851b3b09.slice/cri-containerd-d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.990122090Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0f286b_83c9_47db_8ca8_262541006327.slice/cri-containerd-d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.990230320Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0f286b_83c9_47db_8ca8_262541006327.slice/cri-containerd-d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.991088318Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867e906_a63a_422b_beae_6e1d81b5654a.slice/cri-containerd-3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.991209152Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode867e906_a63a_422b_beae_6e1d81b5654a.slice/cri-containerd-3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.992125176Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de06074c586be0533d71f8f54d7e57c.slice/cri-containerd-14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.992255989Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de06074c586be0533d71f8f54d7e57c.slice/cri-containerd-14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993169099Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d10805f753972429fa4ab31a638d54.slice/cri-containerd-b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993287582Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d10805f753972429fa4ab31a638d54.slice/cri-containerd-b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.993995411Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513d90b1_bb58_4604_96fc_4a7ff1689c05.slice/cri-containerd-e5a930fb2ef9c180226083661a45affe2306b4356ba63eae9877ce0405b085ef.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.994097175Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod513d90b1_bb58_4604_96fc_4a7ff1689c05.slice/cri-containerd-e5a930fb2ef9c180226083661a45affe2306b4356ba63eae9877ce0405b085ef.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.995058563Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fe9fcf_a85d_4ccf_aac4_cf15ef4ec85f.slice/cri-containerd-5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.995285986Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fe9fcf_a85d_4ccf_aac4_cf15ef4ec85f.slice/cri-containerd-5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.996926884Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada616becfcef6d3ba653f5e99f70f60.slice/cri-containerd-978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.997044815Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada616becfcef6d3ba653f5e99f70f60.slice/cri-containerd-978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375.scope/hugetlb.1GB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.997908577Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3531edfb0a338dd5e68dd12ad922cfe3.slice/cri-containerd-73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f.scope/hugetlb.2MB.events\""
Dec 19 02:33:52 functional-180941 containerd[3816]: time="2025-12-19T02:33:52.998021730Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3531edfb0a338dd5e68dd12ad922cfe3.slice/cri-containerd-73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f.scope/hugetlb.1GB.events\""
Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.474994730Z" level=info msg="RunPodSandbox for name:\"nginx-svc\" uid:\"e77ecfa4-b7ec-4449-9875-5ff766083fa4\" namespace:\"default\""
Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.509321659Z" level=info msg="connecting to shim b30f7a7c7edfde844f26e412444e4b0cdb4ff9ccffbefde976cae0c09bd20cce" address="unix:///run/containerd/s/8e6e02bc9f77de8cf3f6168c8feccb155ca9e56f4bf65c39f55e8d399c27757d" namespace=k8s.io protocol=ttrpc version=3
Dec 19 02:33:53 functional-180941 containerd[3816]: time="2025-12-19T02:33:53.586561991Z" level=info msg="RunPodSandbox for name:\"nginx-svc\" uid:\"e77ecfa4-b7ec-4449-9875-5ff766083fa4\" namespace:\"default\" returns sandbox id \"b30f7a7c7edfde844f26e412444e4b0cdb4ff9ccffbefde976cae0c09bd20cce\""
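The containerd section above is the node's containerd unit journal. A minimal sketch of pulling the same log directly, assuming systemd-managed containerd inside the kicbase node and that minikube ssh forwards the trailing command:

        # tail the containerd journal on the node
        minikube -p functional-180941 ssh -- sudo journalctl -u containerd --no-pager -n 50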
==> coredns [009e96b5d79eda655f895c8bbc5db174f1320debd7bb79bbd76b2d3cf491505f] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:46335 - 2289 "HINFO IN 4435456827762028857.6877573451654151863. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02952319s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [d9d9e1018c454b4fb842edfd94ce775aaf6efcbf99fd0057312823735f2de248] <==
maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:55503 - 485 "HINFO IN 961960786048569969.2083331202308493596. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.036526094s
==> describe nodes <==
Name: functional-180941
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-180941
kubernetes.io/os=linux
minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
minikube.k8s.io/name=functional-180941
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_19T02_32_18_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 Dec 2025 02:32:14 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-180941
AcquireTime: <unset>
RenewTime: Fri, 19 Dec 2025 02:33:46 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:12 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Fri, 19 Dec 2025 02:33:46 +0000   Fri, 19 Dec 2025 02:32:36 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-180941
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863352Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32863352Ki
pods: 110
System Info:
Machine ID: 99cc213c06a11cdf07b2a4d26942818a
System UUID: 113dc0ee-fc8e-4afa-94c5-3ed969a2cce9
Boot ID: a0dec9bb-d63c-4dc5-9036-bbcaf9f2c6be
Kernel Version: 6.8.0-1045-gcp
OS Image: Debian GNU/Linux 12 (bookworm)
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://2.2.0
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods:  (16 in total)
  Namespace             Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------             ----                                                   ------------  ----------  ---------------  -------------  ---
  default               hello-node-75c85bcc94-4z4zz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
  default               hello-node-connect-7d85dfc575-cmmxr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
  default               nginx-svc                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
  kube-system           coredns-66bc5c9577-wzv8l                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
  kube-system           etcd-functional-180941                                 100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
  kube-system           kindnet-xh25x                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
  kube-system           kube-apiserver-functional-180941                       250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
  kube-system           kube-controller-manager-functional-180941              200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
  kube-system           kube-proxy-j855q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
  kube-system           kube-scheduler-functional-180941                       100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
  kube-system           storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
  kubernetes-dashboard  kubernetes-dashboard-api-c5cfcc8f7-d8jxv               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
  kubernetes-dashboard  kubernetes-dashboard-auth-57b44fc5d4-glcd2             100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
  kubernetes-dashboard  kubernetes-dashboard-kong-9849c64bd-m8mlk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
  kubernetes-dashboard  kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988  100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
  kubernetes-dashboard  kubernetes-dashboard-web-5c9f966b98-htfj4              100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1250m (15%)  1100m (13%)
  memory             1020Mi (3%)  1820Mi (5%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 91s                kube-proxy
  Normal  Starting                 37s                kube-proxy
  Normal  NodeHasSufficientPID     98s                kubelet          Node functional-180941 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  98s                kubelet          Node functional-180941 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    98s                kubelet          Node functional-180941 status is now: NodeHasNoDiskPressure
  Normal  Starting                 98s                kubelet          Starting kubelet.
  Normal  RegisteredNode           94s                node-controller  Node functional-180941 event: Registered Node functional-180941 in Controller
  Normal  NodeReady                79s                kubelet          Node functional-180941 status is now: NodeReady
  Normal  Starting                 43s                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node functional-180941 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node functional-180941 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node functional-180941 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
  Normal  RegisteredNode           37s                node-controller  Node functional-180941 event: Registered Node functional-180941 in Controller
==> dmesg <==
[Dec19 01:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[ +0.001886] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[ +0.085011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
[ +0.395482] i8042: Warning: Keylock active
[ +0.012710] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.497460] block sda: the capability attribute has been deprecated.
[ +0.080392] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.020963] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +5.499240] kauditd_printk_skb: 47 callbacks suppressed
==> etcd [5fe6627431dbf2438de339cd3c2f774be8680c2c817b42ff82b17eb12ce2d65b] <==
{"level":"warn","ts":"2025-12-19T02:32:14.151098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.158003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.166559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56356","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.178756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56376","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.185878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.193063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56406","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:32:14.250864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-12-19T02:33:11.604834Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-12-19T02:33:11.604940Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-180941","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
{"level":"error","ts":"2025-12-19T02:33:11.605065Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-19T02:33:11.606630Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-12-19T02:33:11.606712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:33:11.606751Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2025-12-19T02:33:11.606798Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
{"level":"warn","ts":"2025-12-19T02:33:11.606828Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:33:11.606871Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:33:11.606895Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
{"level":"warn","ts":"2025-12-19T02:33:11.606902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-12-19T02:33:11.606908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"error","ts":"2025-12-19T02:33:11.606912Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:33:11.606858Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-12-19T02:33:11.608378Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
{"level":"error","ts":"2025-12-19T02:33:11.608435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-12-19T02:33:11.608465Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-12-19T02:33:11.608474Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-180941","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
==> etcd [978c2213695574722ba0df876a653ef754fa77c949290ea538ee2e4e2020c375] <==
{"level":"warn","ts":"2025-12-19T02:33:14.625000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46842","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.633685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.640444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.655216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46912","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.662012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.669786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46958","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.676874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.684553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.691653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.700155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47022","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.707182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47038","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.714730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47046","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.731630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47050","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.738214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47064","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.744739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47080","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:14.792248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.657194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60558","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.670015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.679943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60588","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.716088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.722948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.733347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.751324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60654","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.762683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60674","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-12-19T02:33:48.802509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
==> kernel <==
02:33:55 up 1:16, 0 user, load average: 1.62, 13.61, 55.28
Linux functional-180941 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kindnet [8eded8e1cc2ded02f545d869951b55d5921f8ca55217b36f3ea6b1112c3fd2f1] <==
I1219 02:32:25.988530 1 main.go:109] connected to apiserver: https://10.96.0.1:443
I1219 02:32:25.988848 1 main.go:139] hostIP = 192.168.49.2
podIP = 192.168.49.2
I1219 02:32:25.988981 1 main.go:148] setting mtu 1500 for CNI
I1219 02:32:25.988997 1 main.go:178] kindnetd IP family: "ipv4"
I1219 02:32:25.989030 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
time="2025-12-19T02:32:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
I1219 02:32:26.236045 1 controller.go:377] "Starting controller" name="kube-network-policies"
I1219 02:32:26.236075 1 controller.go:381] "Waiting for informer caches to sync"
I1219 02:32:26.236087 1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
I1219 02:32:26.236230 1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
I1219 02:32:26.636205 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1219 02:32:26.636241 1 metrics.go:72] Registering metrics
I1219 02:32:26.636310 1 controller.go:711] "Syncing nftables rules"
I1219 02:32:36.236766 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:32:36.236834 1 main.go:301] handling current node
I1219 02:32:46.236226 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:32:46.236279 1 main.go:301] handling current node
I1219 02:32:56.236370 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:32:56.236409 1 main.go:301] handling current node
==> kindnet [9026560daa2d2d55434b5762a61b6fd06b3a43ed6057f7dea0b212358df2fff6] <==
E1219 02:33:01.986953 1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
E1219 02:33:01.987169 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
E1219 02:33:02.814881 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
E1219 02:33:02.831912 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
E1219 02:33:03.234880 1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
E1219 02:33:03.280866 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
E1219 02:33:05.817768 1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
E1219 02:33:05.831653 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
E1219 02:33:05.930770 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
E1219 02:33:06.422982 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
E1219 02:33:10.438335 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
E1219 02:33:10.765918 1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
E1219 02:33:11.416505 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
E1219 02:33:11.710208 1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
I1219 02:33:21.890870 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:33:21.890956 1 main.go:301] handling current node
I1219 02:33:24.191701 1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
I1219 02:33:24.191731 1 metrics.go:72] Registering metrics
I1219 02:33:24.191811 1 controller.go:711] "Syncing nftables rules"
I1219 02:33:31.890727 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:33:31.890769 1 main.go:301] handling current node
I1219 02:33:41.890894 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:33:41.890941 1 main.go:301] handling current node
I1219 02:33:51.890911 1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
I1219 02:33:51.891041 1 main.go:301] handling current node
==> kube-apiserver [14ac2a6e3997412d1b684bbf8fc1a50a3c066b8c6e95fb58ef19669045c85531] <==
I1219 02:33:44.229221 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 02:33:44.236571 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
I1219 02:33:44.243509 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
I1219 02:33:44.260035 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 02:33:44.268026 1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
I1219 02:33:46.639431 1 controller.go:667] quota admission added evaluator for: namespaces
I1219 02:33:46.686600 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.110.21.49"}
I1219 02:33:46.689680 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.109.40.251"}
I1219 02:33:46.694860 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.96.19.203"}
I1219 02:33:46.697189 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.11.5"}
I1219 02:33:46.701691 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.98.156.91"}
W1219 02:33:48.645680 1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.657041 1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.669978 1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 02:33:48.679889 1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.706699 1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1219 02:33:48.722841 1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.733290 1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.751242 1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.762665 1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.775206 1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.792632 1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1219 02:33:48.802411 1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
I1219 02:33:53.165796 1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.116.109"}
I1219 02:33:55.192190 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.100.243"}
==> kube-controller-manager [b39cd9d618557e29380ddd4c5b661a409cad10586915d650a33d1d82d44c9ade] <==
I1219 02:33:18.625590 1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
I1219 02:33:18.626221 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
I1219 02:33:18.626848 1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
I1219 02:33:18.627200 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1219 02:33:18.627478 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1219 02:33:18.629346 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1219 02:33:18.629997 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1219 02:33:18.630096 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1219 02:33:18.633733 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1219 02:33:18.637003 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1219 02:33:18.639282 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1219 02:33:18.639293 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1219 02:33:48.634202 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
I1219 02:33:48.634257 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
I1219 02:33:48.634290 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
I1219 02:33:48.634320 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
I1219 02:33:48.634354 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
I1219 02:33:48.634391 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
I1219 02:33:48.634447 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
I1219 02:33:48.634489 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
I1219 02:33:48.634531 1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
I1219 02:33:48.634650 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1219 02:33:48.648719 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1219 02:33:49.835186 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1219 02:33:49.849392 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
==> kube-controller-manager [e7f1ab2e451e8d1832587d91da1a66105cf4f700f68c5bbdb245d0ce51ecacaa] <==
I1219 02:33:03.054880 1 serving.go:386] Generated self-signed cert in-memory
I1219 02:33:03.434988 1 controllermanager.go:191] "Starting" version="v1.34.3"
I1219 02:33:03.435012 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:33:03.436533 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1219 02:33:03.436542 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1219 02:33:03.436936 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1219 02:33:03.437057 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1219 02:33:13.438558 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
==> kube-proxy [296dd104376b46a04a176ca9b4b06cd8abeba8ff9549f9aa36379e1e289910c0] <==
I1219 02:32:23.486269 1 server_linux.go:53] "Using iptables proxy"
I1219 02:32:23.549556 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1219 02:32:23.650660 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1219 02:32:23.650718 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1219 02:32:23.650847 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1219 02:32:23.672863 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1219 02:32:23.672935 1 server_linux.go:132] "Using iptables Proxier"
I1219 02:32:23.678176 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1219 02:32:23.678611 1 server.go:527] "Version info" version="v1.34.3"
I1219 02:32:23.678651 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:32:23.680736 1 config.go:200] "Starting service config controller"
I1219 02:32:23.680759 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1219 02:32:23.680773 1 config.go:309] "Starting node config controller"
I1219 02:32:23.680778 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1219 02:32:23.680786 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1219 02:32:23.680800 1 config.go:403] "Starting serviceCIDR config controller"
I1219 02:32:23.680806 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1219 02:32:23.680826 1 config.go:106] "Starting endpoint slice config controller"
I1219 02:32:23.680878 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1219 02:32:23.781924 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1219 02:32:23.781942 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1219 02:32:23.781966 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [d333d965ea2cd406df4dbfc0a3009ea6fc19b4e00e9486ab18dd8d5480fc5629] <==
I1219 02:33:02.533044 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1219 02:33:02.534219 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1219 02:33:03.392134 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1219 02:33:06.289967 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1219 02:33:10.699791 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-180941&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
I1219 02:33:18.133252 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1219 02:33:18.133291 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
E1219 02:33:18.133368 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1219 02:33:18.155887 1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I1219 02:33:18.155974 1 server_linux.go:132] "Using iptables Proxier"
I1219 02:33:18.162246 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1219 02:33:18.162672 1 server.go:527] "Version info" version="v1.34.3"
I1219 02:33:18.162705 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1219 02:33:18.164047 1 config.go:106] "Starting endpoint slice config controller"
I1219 02:33:18.164079 1 config.go:200] "Starting service config controller"
I1219 02:33:18.164095 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1219 02:33:18.164088 1 config.go:403] "Starting serviceCIDR config controller"
I1219 02:33:18.164114 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1219 02:33:18.164113 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1219 02:33:18.164147 1 config.go:309] "Starting node config controller"
I1219 02:33:18.164155 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1219 02:33:18.264302 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1219 02:33:18.264326 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1219 02:33:18.264363 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1219 02:33:18.264380 1 shared_informer.go:356] "Caches are synced" controller="node config"
==> kube-scheduler [73020ed8c20ec15aefab8b3b5f3654c45ab1f831a3da9d52092d960bd67f098f] <==
E1219 02:33:07.464067 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1219 02:33:07.567061 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1219 02:33:07.638842 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 02:33:07.717843 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1219 02:33:08.102921 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1219 02:33:10.137353 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1219 02:33:10.619167 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1219 02:33:10.707146 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1219 02:33:10.781682 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1219 02:33:10.863688 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1219 02:33:11.244900 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1219 02:33:11.343646 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1219 02:33:11.596275 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1219 02:33:11.775282 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1219 02:33:12.403636 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1219 02:33:12.434362 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1219 02:33:12.468170 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1219 02:33:12.479593 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1219 02:33:12.588547 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1219 02:33:12.745806 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1219 02:33:13.509021 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 02:33:15.188848 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1219 02:33:15.202705 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1219 02:33:15.202825 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
I1219 02:33:27.503401 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [c64dcbfa4c96d0561fc7d99addb1ea41361024efacc0f7dcae1170ed6e33d2f3] <==
E1219 02:32:14.945776 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1219 02:32:14.945797 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1219 02:32:14.945872 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1219 02:32:14.945954 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1219 02:32:14.945993 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 02:32:14.946011 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1219 02:32:14.946050 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1219 02:32:14.946082 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1219 02:32:14.946108 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1219 02:32:14.946161 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1219 02:32:14.946158 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1219 02:32:14.946314 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1219 02:32:14.946688 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1219 02:32:15.782945 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1219 02:32:15.794772 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1219 02:32:15.812099 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1219 02:32:15.817010 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1219 02:32:15.823045 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
I1219 02:32:16.543347 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:33:01.456664 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1219 02:33:01.456756 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1219 02:33:01.456709 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1219 02:33:01.457000 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1219 02:33:01.457018 1 server.go:265] "[graceful-termination] secure server is exiting"
E1219 02:33:01.457043 1 run.go:72] "command failed" err="finished without leader elect"
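The "Failed to watch ... is forbidden" entries above look like startup-time RBAC races: the scheduler's informers begin listing resources before its role bindings are being served, and in this log the errors stop shortly before the "Caches are synced" entry. A minimal diagnostic sketch, assuming the same kubeconfig context the harness uses, for re-checking those permissions after startup:

# Hedged sketch: impersonate the scheduler identity and re-check two of the
# list permissions reported as forbidden above (assumes context functional-180941).
kubectl --context functional-180941 auth can-i list pods --as=system:kube-scheduler
kubectl --context functional-180941 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
# Inspect the ClusterRole the scheduler identity is bound to.
kubectl --context functional-180941 describe clusterrole system:kube-scheduler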
==> kubelet <==
Dec 19 02:33:41 functional-180941 kubelet[4795]: I1219 02:33:41.966730 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") pod \"busybox-mount\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") " pod="default/busybox-mount"
Dec 19 02:33:45 functional-180941 kubelet[4795]: I1219 02:33:45.017506 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-4z4zz" podStartSLOduration=3.935733318 podStartE2EDuration="6.017481771s" podCreationTimestamp="2025-12-19 02:33:39 +0000 UTC" firstStartedPulling="2025-12-19 02:33:39.551338867 +0000 UTC m=+26.762353014" lastFinishedPulling="2025-12-19 02:33:41.633087322 +0000 UTC m=+28.844101467" observedRunningTime="2025-12-19 02:33:42.008029654 +0000 UTC m=+29.219043832" watchObservedRunningTime="2025-12-19 02:33:45.017481771 +0000 UTC m=+32.228495931"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193401 4795 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") pod \"e428a017-930f-45d6-91a4-41044c3cd159\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") "
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193468 4795 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume\") pod \"e428a017-930f-45d6-91a4-41044c3cd159\" (UID: \"e428a017-930f-45d6-91a4-41044c3cd159\") "
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.193574 4795 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume" (OuterVolumeSpecName: "test-volume") pod "e428a017-930f-45d6-91a4-41044c3cd159" (UID: "e428a017-930f-45d6-91a4-41044c3cd159"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.196186 4795 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6" (OuterVolumeSpecName: "kube-api-access-v7rm6") pod "e428a017-930f-45d6-91a4-41044c3cd159" (UID: "e428a017-930f-45d6-91a4-41044c3cd159"). InnerVolumeSpecName "kube-api-access-v7rm6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.294115 4795 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7rm6\" (UniqueName: \"kubernetes.io/projected/e428a017-930f-45d6-91a4-41044c3cd159-kube-api-access-v7rm6\") on node \"functional-180941\" DevicePath \"\""
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.294160 4795 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e428a017-930f-45d6-91a4-41044c3cd159-test-volume\") on node \"functional-180941\" DevicePath \"\""
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.898983 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899070 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899154 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d85d\" (UniqueName: \"kubernetes.io/projected/f58425c1-30d8-451d-95e9-e5baa914d266-kube-api-access-4d85d\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988\" (UID: \"f58425c1-30d8-451d-95e9-e5baa914d266\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899197 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/67d78b53-5a1c-42b7-8a19-5afbac88f345-tmp-volume\") pod \"kubernetes-dashboard-api-c5cfcc8f7-d8jxv\" (UID: \"67d78b53-5a1c-42b7-8a19-5afbac88f345\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-c5cfcc8f7-d8jxv"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899225 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98f6c7f8-cead-462c-8433-d56e7b95a6bf-tmp-volume\") pod \"kubernetes-dashboard-auth-57b44fc5d4-glcd2\" (UID: \"98f6c7f8-cead-462c-8433-d56e7b95a6bf\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-57b44fc5d4-glcd2"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899242 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbw9\" (UniqueName: \"kubernetes.io/projected/98f6c7f8-cead-462c-8433-d56e7b95a6bf-kube-api-access-pjbw9\") pod \"kubernetes-dashboard-auth-57b44fc5d4-glcd2\" (UID: \"98f6c7f8-cead-462c-8433-d56e7b95a6bf\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-57b44fc5d4-glcd2"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899256 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt59\" (UniqueName: \"kubernetes.io/projected/36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f-kube-api-access-mrt59\") pod \"kubernetes-dashboard-web-5c9f966b98-htfj4\" (UID: \"36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899271 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/1e3265b8-b920-4d8d-88ce-d1a5577ec92a-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-m8mlk\" (UID: \"1e3265b8-b920-4d8d-88ce-d1a5577ec92a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-m8mlk"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899287 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9dsz\" (UniqueName: \"kubernetes.io/projected/67d78b53-5a1c-42b7-8a19-5afbac88f345-kube-api-access-d9dsz\") pod \"kubernetes-dashboard-api-c5cfcc8f7-d8jxv\" (UID: \"67d78b53-5a1c-42b7-8a19-5afbac88f345\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-c5cfcc8f7-d8jxv"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899300 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-htfj4\" (UID: \"36fe9fcf-a85d-4ccf-aac4-cf15ef4ec85f\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4"
Dec 19 02:33:46 functional-180941 kubelet[4795]: I1219 02:33:46.899348 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f58425c1-30d8-451d-95e9-e5baa914d266-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988\" (UID: \"f58425c1-30d8-451d-95e9-e5baa914d266\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988"
Dec 19 02:33:47 functional-180941 kubelet[4795]: I1219 02:33:47.015232 4795 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452388232e3cdb22d0e0a662cfc2478bcede2e29c5a8aa6d7cdd9ede97b8b9e7"
Dec 19 02:33:51 functional-180941 kubelet[4795]: I1219 02:33:51.990999 4795 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
Dec 19 02:33:51 functional-180941 kubelet[4795]: I1219 02:33:51.991101 4795 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863352Ki","pods":"110"}
Dec 19 02:33:53 functional-180941 kubelet[4795]: I1219 02:33:53.159063 4795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-htfj4" podStartSLOduration=2.38067891 podStartE2EDuration="7.159036419s" podCreationTimestamp="2025-12-19 02:33:46 +0000 UTC" firstStartedPulling="2025-12-19 02:33:47.212389566 +0000 UTC m=+34.423403716" lastFinishedPulling="2025-12-19 02:33:51.990747076 +0000 UTC m=+39.201761225" observedRunningTime="2025-12-19 02:33:53.063647425 +0000 UTC m=+40.274661599" watchObservedRunningTime="2025-12-19 02:33:53.159036419 +0000 UTC m=+40.370050576"
Dec 19 02:33:53 functional-180941 kubelet[4795]: I1219 02:33:53.242818 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dzt\" (UniqueName: \"kubernetes.io/projected/e77ecfa4-b7ec-4449-9875-5ff766083fa4-kube-api-access-69dzt\") pod \"nginx-svc\" (UID: \"e77ecfa4-b7ec-4449-9875-5ff766083fa4\") " pod="default/nginx-svc"
Dec 19 02:33:55 functional-180941 kubelet[4795]: I1219 02:33:55.257016 4795 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv7gz\" (UniqueName: \"kubernetes.io/projected/f14500b1-7be9-4e64-ba7b-b824b9faef10-kube-api-access-sv7gz\") pod \"hello-node-connect-7d85dfc575-cmmxr\" (UID: \"f14500b1-7be9-4e64-ba7b-b824b9faef10\") " pod="default/hello-node-connect-7d85dfc575-cmmxr"
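The kubelet excerpt above records volume attach/detach reconciliation plus "Observed pod startup duration" lines that split startup latency into image-pull time and end-to-end time. A hedged sketch of cross-checking the same startup sequence from the API side, assuming the same context (pod events are short-lived, so this only works soon after the run):

# Hedged sketch: pull the Scheduled/Pulling/Started events for one of the pods named above.
kubectl --context functional-180941 get events -n default \
  --field-selector involvedObject.name=busybox-mount --sort-by=.lastTimestamp
kubectl --context functional-180941 get events -n kubernetes-dashboard --sort-by=.lastTimestamp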
==> kubernetes-dashboard [5e345c165d4a4d17aeace7fc8bb10cffc748ada9185e633e4f5a955b3ee422f1] <==
I1219 02:33:52.175785 1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
I1219 02:33:52.175855 1 init.go:48] Using in-cluster config
I1219 02:33:52.176125 1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
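The dashboard web container only reports that it is serving insecurely on 0.0.0.0:8000 inside the cluster; whether the dashboard as a whole becomes reachable depends on the other components (kong, api, auth, metrics-scraper) whose volumes are being set up in the kubelet section above. A hedged sketch for checking their state, assuming the addon's kubernetes-dashboard namespace:

# Hedged sketch: see whether all dashboard components reach Running/Ready.
kubectl --context functional-180941 -n kubernetes-dashboard get pods,svc -o wide
# Watch the pods until they settle (Ctrl-C to stop).
kubectl --context functional-180941 -n kubernetes-dashboard get pods -w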
==> storage-provisioner [3d1f591b08793b5957d08a468189d3db4ab4eaedd80f3caadc21cc563081e0bc] <==
W1219 02:33:30.860526 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:32.863699 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:32.868447 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:34.871997 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:34.878290 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:36.881069 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:36.886009 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:38.889528 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:38.893935 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:40.897482 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:40.903706 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:42.906996 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:42.910991 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:44.913833 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:44.917916 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:46.923458 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:46.928113 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:48.931681 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:48.936480 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:50.940547 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:50.947760 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:52.951401 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:52.955428 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:54.959461 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:54.964315 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [4d325bd49d2ae8ad9b2605e9bd9667579857876c54bc9b7f71034e1b896677a4] <==
I1219 02:32:36.959370 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-180941_ff5ea45f-7c5f-49a0-a6c1-a5aef2d7867c!
W1219 02:32:38.868430 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:38.872393 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:40.879172 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:40.884348 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:42.888523 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:42.893943 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:44.897326 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:44.902001 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:46.905734 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:46.910287 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:48.913330 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:48.919094 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:50.922852 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:50.927472 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:52.931112 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:52.935264 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:54.938285 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:54.944139 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:56.947802 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:56.951695 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:58.954942 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:32:58.959120 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:00.962878 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1219 02:33:00.968772 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
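Both storage-provisioner excerpts above repeat the same client-go warning every couple of seconds: the provisioner is still reading or writing a core/v1 Endpoints object, which the API server now flags as deprecated in favor of discovery.k8s.io/v1 EndpointSlice. The "Started provisioner controller k8s.io/minikube-hostpath_..." line is consistent with Endpoints-based leader election, though that is an inference from the log, not something it states. A hedged sketch for inspecting both resource types in the same cluster:

# Hedged sketch: list the legacy Endpoints objects and the EndpointSlice
# counterparts the warning points to (assumes context functional-180941).
kubectl --context functional-180941 get endpoints -A
kubectl --context functional-180941 get endpointslices.discovery.k8s.io -A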
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-180941 -n functional-180941
helpers_test.go:270: (dbg) Run: kubectl --context functional-180941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run: kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988: exit status 1 (109.900604ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-180941/192.168.49.2
Start Time: Fri, 19 Dec 2025 02:33:41 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
mount-munger:
Container ID: containerd://01e01e1796cdd5d38f245adb949d3c722b1407a97c69d13b52526d758d15d83e
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 19 Dec 2025 02:33:44 +0000
Finished: Fri, 19 Dec 2025 02:33:44 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v7rm6 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-v7rm6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15s default-scheduler Successfully assigned default/busybox-mount to functional-180941
Normal Pulling 14s kubelet spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 12s kubelet spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.337s (2.337s including waiting). Image size: 2395207 bytes.
Normal Created 12s kubelet spec.containers{mount-munger}: Created container: mount-munger
Normal Started 12s kubelet spec.containers{mount-munger}: Started container mount-munger
Name: hello-node-connect-7d85dfc575-cmmxr
Namespace: default
Priority: 0
Service Account: default
Node: functional-180941/192.168.49.2
Start Time: Fri, 19 Dec 2025 02:33:55 +0000
Labels: app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:
Image: kicbase/echo-server
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sv7gz (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sv7gz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1s default-scheduler Successfully assigned default/hello-node-connect-7d85dfc575-cmmxr to functional-180941
Normal Pulling 1s kubelet spec.containers{echo-server}: Pulling image "kicbase/echo-server"
Name: nginx-svc
Namespace: default
Priority: 0
Service Account: default
Node: functional-180941/192.168.49.2
Start Time: Fri, 19 Dec 2025 02:33:53 +0000
Labels: run=nginx-svc
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
nginx:
Container ID:
Image: public.ecr.aws/nginx/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69dzt (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-69dzt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/nginx-svc to functional-180941
Normal Pulling 3s kubelet spec.containers{nginx}: Pulling image "public.ecr.aws/nginx/nginx:alpine"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-180941/192.168.49.2
Start Time: Fri, 19 Dec 2025 02:33:56 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
myfrontend:
Container ID:
Image: public.ecr.aws/nginx/nginx:alpine
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pnlff (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-pnlff:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 0s default-scheduler Successfully assigned default/sp-pod to functional-180941
-- /stdout --
** stderr **
Error from server (NotFound): pods "kubernetes-dashboard-api-c5cfcc8f7-d8jxv" not found
Error from server (NotFound): pods "kubernetes-dashboard-auth-57b44fc5d4-glcd2" not found
Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-m8mlk" not found
Error from server (NotFound): pods "kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988" not found
** /stderr **
helpers_test.go:288: kubectl --context functional-180941 describe pod busybox-mount hello-node-connect-7d85dfc575-cmmxr nginx-svc sp-pod kubernetes-dashboard-api-c5cfcc8f7-d8jxv kubernetes-dashboard-auth-57b44fc5d4-glcd2 kubernetes-dashboard-kong-9849c64bd-m8mlk kubernetes-dashboard-metrics-scraper-7685fd8b77-8w988: exit status 1
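The non-zero exit from the post-mortem describe is consistent with a namespace mismatch rather than missing pods: the four kubernetes-dashboard-* names come from the all-namespaces non-running list, but the describe command carries no -n flag, so only the default namespace is searched and those pods come back NotFound. A hedged sketch of equivalent queries that avoid the error, assuming the same context:

# Hedged sketch: describe the dashboard pods in their own namespace, and list
# non-running pods across all namespaces in one call.
kubectl --context functional-180941 -n kubernetes-dashboard describe pods
kubectl --context functional-180941 get pods -A --field-selector=status.phase!=Running -o wide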
--- FAIL: TestFunctional/parallel/DashboardCmd (15.50s)