=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-813066 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-813066 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-813066 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-813066 --alsologtostderr -v=1] stderr:
I1014 19:22:11.582581 89618 out.go:360] Setting OutFile to fd 1 ...
I1014 19:22:11.582844 89618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:22:11.582854 89618 out.go:374] Setting ErrFile to fd 2...
I1014 19:22:11.582859 89618 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:22:11.583062 89618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-77011/.minikube/bin
I1014 19:22:11.583342 89618 mustload.go:65] Loading cluster: functional-813066
I1014 19:22:11.583677 89618 config.go:182] Loaded profile config "functional-813066": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1014 19:22:11.584049 89618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.584120 89618 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.597935 89618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
I1014 19:22:11.598429 89618 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.599086 89618 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.599123 89618 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.599478 89618 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.599738 89618 main.go:141] libmachine: (functional-813066) Calling .GetState
I1014 19:22:11.601587 89618 host.go:66] Checking if "functional-813066" exists ...
I1014 19:22:11.602082 89618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.602138 89618 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.616839 89618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
I1014 19:22:11.617373 89618 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.617922 89618 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.617969 89618 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.618294 89618 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.618529 89618 main.go:141] libmachine: (functional-813066) Calling .DriverName
I1014 19:22:11.618710 89618 api_server.go:166] Checking apiserver status ...
I1014 19:22:11.618779 89618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1014 19:22:11.618803 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHHostname
I1014 19:22:11.622205 89618 main.go:141] libmachine: (functional-813066) DBG | domain functional-813066 has defined MAC address 52:54:00:e8:5a:36 in network mk-functional-813066
I1014 19:22:11.622752 89618 main.go:141] libmachine: (functional-813066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5a:36", ip: ""} in network mk-functional-813066: {Iface:virbr1 ExpiryTime:2025-10-14 20:18:49 +0000 UTC Type:0 Mac:52:54:00:e8:5a:36 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-813066 Clientid:01:52:54:00:e8:5a:36}
I1014 19:22:11.622792 89618 main.go:141] libmachine: (functional-813066) DBG | domain functional-813066 has defined IP address 192.168.39.244 and MAC address 52:54:00:e8:5a:36 in network mk-functional-813066
I1014 19:22:11.622983 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHPort
I1014 19:22:11.623174 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHKeyPath
I1014 19:22:11.623344 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHUsername
I1014 19:22:11.623493 89618 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-77011/.minikube/machines/functional-813066/id_rsa Username:docker}
I1014 19:22:11.724095 89618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5397/cgroup
W1014 19:22:11.737200 89618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5397/cgroup: Process exited with status 1
stdout:
stderr:
I1014 19:22:11.737260 89618 ssh_runner.go:195] Run: ls
I1014 19:22:11.742993 89618 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8441/healthz ...
I1014 19:22:11.748702 89618 api_server.go:279] https://192.168.39.244:8441/healthz returned 200:
ok
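
The apiserver check above ends with /healthz returning 200. For reference, a minimal Go sketch of that kind of probe follows; it is not minikube's actual implementation, it skips TLS verification purely for brevity (the real client uses the client.crt/client.key/ca.crt paths shown in the kapi.go line further down), and the IP and port are taken from the log above.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the control plane's health endpoint, as api_server.go does above.
	// InsecureSkipVerify keeps the sketch short; a faithful probe would load
	// the cluster CA and the profile's client certificate instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.244:8441/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // the log above shows: 200 ok
}
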
W1014 19:22:11.748751 89618 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1014 19:22:11.748910 89618 config.go:182] Loaded profile config "functional-813066": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1014 19:22:11.748922 89618 addons.go:69] Setting dashboard=true in profile "functional-813066"
I1014 19:22:11.748929 89618 addons.go:238] Setting addon dashboard=true in "functional-813066"
I1014 19:22:11.748969 89618 host.go:66] Checking if "functional-813066" exists ...
I1014 19:22:11.749238 89618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.749277 89618 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.763080 89618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
I1014 19:22:11.763518 89618 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.764014 89618 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.764036 89618 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.764341 89618 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.764841 89618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.764880 89618 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.778481 89618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44751
I1014 19:22:11.778975 89618 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.779500 89618 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.779523 89618 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.779808 89618 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.780009 89618 main.go:141] libmachine: (functional-813066) Calling .GetState
I1014 19:22:11.781797 89618 main.go:141] libmachine: (functional-813066) Calling .DriverName
I1014 19:22:11.783768 89618 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1014 19:22:11.785010 89618 out.go:179] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1014 19:22:11.786245 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1014 19:22:11.786259 89618 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1014 19:22:11.786277 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHHostname
I1014 19:22:11.789541 89618 main.go:141] libmachine: (functional-813066) DBG | domain functional-813066 has defined MAC address 52:54:00:e8:5a:36 in network mk-functional-813066
I1014 19:22:11.790021 89618 main.go:141] libmachine: (functional-813066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:5a:36", ip: ""} in network mk-functional-813066: {Iface:virbr1 ExpiryTime:2025-10-14 20:18:49 +0000 UTC Type:0 Mac:52:54:00:e8:5a:36 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:functional-813066 Clientid:01:52:54:00:e8:5a:36}
I1014 19:22:11.790063 89618 main.go:141] libmachine: (functional-813066) DBG | domain functional-813066 has defined IP address 192.168.39.244 and MAC address 52:54:00:e8:5a:36 in network mk-functional-813066
I1014 19:22:11.790262 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHPort
I1014 19:22:11.790505 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHKeyPath
I1014 19:22:11.790662 89618 main.go:141] libmachine: (functional-813066) Calling .GetSSHUsername
I1014 19:22:11.790861 89618 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-77011/.minikube/machines/functional-813066/id_rsa Username:docker}
I1014 19:22:11.892231 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1014 19:22:11.892263 89618 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1014 19:22:11.915980 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1014 19:22:11.916028 89618 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1014 19:22:11.941264 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1014 19:22:11.941289 89618 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1014 19:22:11.965866 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1014 19:22:11.965891 89618 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1014 19:22:11.991679 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1014 19:22:11.991713 89618 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1014 19:22:12.017691 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1014 19:22:12.017718 89618 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1014 19:22:12.043214 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1014 19:22:12.043242 89618 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1014 19:22:12.067494 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1014 19:22:12.067520 89618 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1014 19:22:12.091330 89618 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1014 19:22:12.091357 89618 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1014 19:22:12.116968 89618 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1014 19:22:12.865482 89618 main.go:141] libmachine: Making call to close driver server
I1014 19:22:12.865516 89618 main.go:141] libmachine: (functional-813066) Calling .Close
I1014 19:22:12.865989 89618 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:22:12.866011 89618 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:22:12.866032 89618 main.go:141] libmachine: Making call to close driver server
I1014 19:22:12.866040 89618 main.go:141] libmachine: (functional-813066) Calling .Close
I1014 19:22:12.866284 89618 main.go:141] libmachine: (functional-813066) DBG | Closing plugin on server side
I1014 19:22:12.866303 89618 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:22:12.866312 89618 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:22:12.868084 89618 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-813066 addons enable metrics-server
I1014 19:22:12.868927 89618 addons.go:201] Writing out "functional-813066" config to set dashboard=true...
W1014 19:22:12.869166 89618 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1014 19:22:12.869807 89618 kapi.go:59] client config for functional-813066: &rest.Config{Host:"https://192.168.39.244:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-77011/.minikube/profiles/functional-813066/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-77011/.minikube/profiles/functional-813066/client.key", CAFile:"/home/jenkins/minikube-integration/21409-77011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1014 19:22:12.870276 89618 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1014 19:22:12.870292 89618 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1014 19:22:12.870296 89618 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1014 19:22:12.870299 89618 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1014 19:22:12.870303 89618 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1014 19:22:12.879684 89618 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 0d6ab439-b085-42b1-b3de-6fd4e2183443 852 0 2025-10-14 19:22:12 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-14 19:22:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.42.253,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.42.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1014 19:22:12.879827 89618 out.go:285] * Launching proxy ...
* Launching proxy ...
I1014 19:22:12.879890 89618 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-813066 proxy --port 36195]
I1014 19:22:12.880218 89618 dashboard.go:157] Waiting for kubectl to output host:port ...
I1014 19:22:12.926257 89618 out.go:203]
W1014 19:22:12.927354 89618 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1014 19:22:12.927369 89618 out.go:285] *
*
W1014 19:22:12.932068 89618 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1014 19:22:12.933681 89618 out.go:203]
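
The failure above happens after the dashboard addon deploys successfully: minikube launches kubectl proxy --port 36195 (dashboard.go:152) and then waits to read the host:port that kubectl prints on stdout (dashboard.go:157). HOST_KUBECTL_PROXY with readByteWithTimeout: EOF means the proxy's stdout closed before anything was printed, i.e. the kubectl child exited almost immediately. A minimal sketch of that wait follows, using a hypothetical helper rather than minikube's implementation; the context name, port, and the "Starting to serve on ..." line format are taken from the log and from kubectl's usual output.

package main

import (
	"bufio"
	"fmt"
	"io"
	"os/exec"
	"strings"
	"time"
)

// waitForProxyHostPort reads the first line a spawned `kubectl proxy` writes to
// stdout and returns the trailing HOST:PORT token, failing if the pipe closes
// (EOF) or nothing arrives before the timeout; these are the failure modes the
// log above reports as readByteWithTimeout errors.
func waitForProxyHostPort(stdout io.Reader, timeout time.Duration) (string, error) {
	r := bufio.NewReader(stdout)
	lineCh := make(chan string, 1)
	errCh := make(chan error, 1)
	go func() {
		line, err := r.ReadString('\n')
		if err != nil {
			errCh <- err // EOF here means kubectl exited before printing anything
			return
		}
		lineCh <- line
	}()
	select {
	case line := <-lineCh:
		// kubectl proxy normally prints e.g. "Starting to serve on 127.0.0.1:36195".
		fields := strings.Fields(line)
		if len(fields) == 0 {
			return "", fmt.Errorf("unexpected empty line from kubectl proxy")
		}
		return fields[len(fields)-1], nil
	case err := <-errCh:
		return "", fmt.Errorf("kubectl proxy: %w", err)
	case <-time.After(timeout):
		return "", fmt.Errorf("timed out waiting for kubectl proxy output")
	}
}

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-813066", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	hostPort, err := waitForProxyHostPort(stdout, 30*time.Second)
	fmt.Println(hostPort, err)
}

Re-running the same command by hand (/usr/local/bin/kubectl --context functional-813066 proxy --port 36195) and looking at its stderr is usually the quickest way to see why it exits, for example a port that is already in use or a kubeconfig/context problem.
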
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-813066 -n functional-813066
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p functional-813066 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-813066 logs -n 25: (2.130607215s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ addons │ functional-813066 addons list │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ addons │ functional-813066 addons list -o json │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image ls │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image load --daemon kicbase/echo-server:functional-813066 --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image ls │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image load --daemon kicbase/echo-server:functional-813066 --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image ls │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image save kicbase/echo-server:functional-813066 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image rm kicbase/echo-server:functional-813066 --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image ls │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image ls │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ image │ functional-813066 image save --daemon kicbase/echo-server:functional-813066 --alsologtostderr │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:21 UTC │ 14 Oct 25 19:21 UTC │
│ service │ functional-813066 service hello-node-connect --url │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ 14 Oct 25 19:22 UTC │
│ start │ -p functional-813066 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd --auto-update-drivers=false │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ ssh │ functional-813066 ssh findmnt -T /mount-9p | grep 9p │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ mount │ -p functional-813066 /tmp/TestFunctionalparallelMountCmdany-port1541284530/001:/mount-9p --alsologtostderr -v=1 │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ ssh │ functional-813066 ssh findmnt -T /mount-9p | grep 9p │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ 14 Oct 25 19:22 UTC │
│ ssh │ functional-813066 ssh -- ls -la /mount-9p │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ 14 Oct 25 19:22 UTC │
│ ssh │ functional-813066 ssh cat /mount-9p/test-1760469726579443345 │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ 14 Oct 25 19:22 UTC │
│ ssh │ functional-813066 ssh sudo systemctl is-active docker │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ ssh │ functional-813066 ssh sudo systemctl is-active crio │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ start │ -p functional-813066 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd --auto-update-drivers=false │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ start │ -p functional-813066 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd --auto-update-drivers=false │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
│ dashboard │ --url --port 36195 -p functional-813066 --alsologtostderr -v=1 │ functional-813066 │ jenkins │ v1.37.0 │ 14 Oct 25 19:22 UTC │ │
└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/14 19:22:11
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1014 19:22:11.447964 89590 out.go:360] Setting OutFile to fd 1 ...
I1014 19:22:11.448243 89590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:22:11.448255 89590 out.go:374] Setting ErrFile to fd 2...
I1014 19:22:11.448259 89590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:22:11.448496 89590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-77011/.minikube/bin
I1014 19:22:11.448993 89590 out.go:368] Setting JSON to false
I1014 19:22:11.449920 89590 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7469,"bootTime":1760462262,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1014 19:22:11.450039 89590 start.go:141] virtualization: kvm guest
I1014 19:22:11.451741 89590 out.go:179] * [functional-813066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1014 19:22:11.452825 89590 notify.go:220] Checking for updates...
I1014 19:22:11.452848 89590 out.go:179] - MINIKUBE_LOCATION=21409
I1014 19:22:11.454088 89590 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1014 19:22:11.455249 89590 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21409-77011/kubeconfig
I1014 19:22:11.456416 89590 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-77011/.minikube
I1014 19:22:11.457496 89590 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1014 19:22:11.458639 89590 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1014 19:22:11.460132 89590 config.go:182] Loaded profile config "functional-813066": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1014 19:22:11.460561 89590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.460610 89590 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.474530 89590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
I1014 19:22:11.475151 89590 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.475685 89590 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.475706 89590 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.476104 89590 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.476364 89590 main.go:141] libmachine: (functional-813066) Calling .DriverName
I1014 19:22:11.476692 89590 driver.go:421] Setting default libvirt URI to qemu:///system
I1014 19:22:11.477123 89590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1014 19:22:11.477196 89590 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:22:11.491937 89590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
I1014 19:22:11.492472 89590 main.go:141] libmachine: () Calling .GetVersion
I1014 19:22:11.493004 89590 main.go:141] libmachine: Using API Version 1
I1014 19:22:11.493036 89590 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:22:11.493371 89590 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:22:11.493570 89590 main.go:141] libmachine: (functional-813066) Calling .DriverName
I1014 19:22:11.529635 89590 out.go:179] * Using the kvm2 driver based on existing profile
I1014 19:22:11.530703 89590 start.go:305] selected driver: kvm2
I1014 19:22:11.530719 89590 start.go:925] validating driver "kvm2" against &{Name:functional-813066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-813066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1014 19:22:11.530855 89590 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1014 19:22:11.532266 89590 cni.go:84] Creating CNI manager for ""
I1014 19:22:11.532327 89590 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I1014 19:22:11.532373 89590 start.go:349] cluster config:
{Name:functional-813066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-813066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1014 19:22:11.534390 89590 out.go:179] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
625c65389f258 56cc512116c8f Less than a second ago Exited mount-munger 0 287184294b6d2 busybox-mount
7dc5ff39dcbd2 9056ab77afb8e 7 seconds ago Running echo-server 0 00d860ddc82e0 hello-node-75c85bcc94-8v96x
c7f463f486005 07ccdb7838758 8 seconds ago Exited myfrontend 0 54b27a10a3e4d sp-pod
05c8ea83d5ba2 9056ab77afb8e 15 seconds ago Running echo-server 0 d4aaf55ab94ec hello-node-connect-7d85dfc575-7f89b
0b79c612553d8 5107333e08a87 17 seconds ago Running mysql 0 61b013faa21c1 mysql-5bb876957f-mncgc
61e5f2fe5ea0c 6e38f40d628db 41 seconds ago Running storage-provisioner 4 718c6797c3240 storage-provisioner
2acb0d26ece89 fc25172553d79 54 seconds ago Running kube-proxy 2 578f69264ae41 kube-proxy-9bjgg
bd3253885f988 52546a367cc9e 54 seconds ago Running coredns 2 ecff298846de6 coredns-66bc5c9577-z289p
cbd90021f8f8b 6e38f40d628db 54 seconds ago Exited storage-provisioner 3 718c6797c3240 storage-provisioner
a5d81d9fe5fa6 c3994bc696102 58 seconds ago Running kube-apiserver 0 9c99fcad086ea kube-apiserver-functional-813066
a4ac300cdbe10 c80c8dbafe7dd 58 seconds ago Running kube-controller-manager 3 d31c96792b4bc kube-controller-manager-functional-813066
7774b54955bca 7dd6aaa1717ab 58 seconds ago Running kube-scheduler 2 a75c8ae510333 kube-scheduler-functional-813066
6258054b30fd8 5f1f5298c888d About a minute ago Running etcd 2 2ffdac1d5783e etcd-functional-813066
49fb2c37ba348 c80c8dbafe7dd About a minute ago Exited kube-controller-manager 2 d31c96792b4bc kube-controller-manager-functional-813066
80ea16c1a937b 5f1f5298c888d 2 minutes ago Exited etcd 1 2ffdac1d5783e etcd-functional-813066
9aeb7d4fda842 52546a367cc9e 2 minutes ago Exited coredns 1 ecff298846de6 coredns-66bc5c9577-z289p
8e4ea712a1f42 fc25172553d79 2 minutes ago Exited kube-proxy 1 578f69264ae41 kube-proxy-9bjgg
a5ca6625d76fa 7dd6aaa1717ab 2 minutes ago Exited kube-scheduler 1 a75c8ae510333 kube-scheduler-functional-813066
==> containerd <==
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.510726493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-q682n,Uid:07ae4d38-3b9f-4aa3-a8b2-a544abe2f710,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"fc9d1442098578b39465b4df078ad53b1dbf50274a098e153d4664cbea3fcacc\""
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.517945046Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.532667492Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.549221920Z" level=info msg="CreateContainer within sandbox \"287184294b6d2372bd3f7a903481cf6caad3a18b0cfcf47b7aa28db2634c8061\" for &ContainerMetadata{Name:mount-munger,Attempt:0,} returns container id \"625c65389f2581f493f660f66bf4133b3fa12ad88c6a073d04bfe8fdd6187b77\""
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.551725290Z" level=info msg="StartContainer for \"625c65389f2581f493f660f66bf4133b3fa12ad88c6a073d04bfe8fdd6187b77\""
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.680940456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-8pkvp,Uid:aca0e0e4-8bdc-4316-8ec4-b02d17386444,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"cf2cfa3b3ae24d1abdc98cb777a2bc4959a0af4b6b5ae159dd58c0ac8bc51d73\""
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.714905112Z" level=info msg="StartContainer for \"625c65389f2581f493f660f66bf4133b3fa12ad88c6a073d04bfe8fdd6187b77\" returns successfully"
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.790562479Z" level=info msg="StopContainer for \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\" with timeout 30 (s)"
Oct 14 19:22:13 functional-813066 containerd[4505]: time="2025-10-14T19:22:13.792650178Z" level=info msg="Stop container \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\" with signal quit"
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.046579662Z" level=info msg="shim disconnected" id=c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0 namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.046631250Z" level=warning msg="cleaning up after shim disconnected" id=c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0 namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.046641780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.050340751Z" level=info msg="shim disconnected" id=625c65389f2581f493f660f66bf4133b3fa12ad88c6a073d04bfe8fdd6187b77 namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.050413084Z" level=warning msg="cleaning up after shim disconnected" id=625c65389f2581f493f660f66bf4133b3fa12ad88c6a073d04bfe8fdd6187b77 namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.050424264Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.129402179Z" level=info msg="StopContainer for \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\" returns successfully"
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.130935989Z" level=info msg="StopPodSandbox for \"54b27a10a3e4dcdeda05c5305cd8d61e0cb1496faf9359484cdfda2837d7221e\""
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.131324250Z" level=info msg="Container to stop \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.202053321Z" level=info msg="shim disconnected" id=54b27a10a3e4dcdeda05c5305cd8d61e0cb1496faf9359484cdfda2837d7221e namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.202310337Z" level=warning msg="cleaning up after shim disconnected" id=54b27a10a3e4dcdeda05c5305cd8d61e0cb1496faf9359484cdfda2837d7221e namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.202428061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.229744634Z" level=warning msg="cleanup warnings time=\"2025-10-14T19:22:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.316294063Z" level=info msg="TearDown network for sandbox \"54b27a10a3e4dcdeda05c5305cd8d61e0cb1496faf9359484cdfda2837d7221e\" successfully"
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.316340420Z" level=info msg="StopPodSandbox for \"54b27a10a3e4dcdeda05c5305cd8d61e0cb1496faf9359484cdfda2837d7221e\" returns successfully"
Oct 14 19:22:14 functional-813066 containerd[4505]: time="2025-10-14T19:22:14.416275934Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
==> coredns [9aeb7d4fda842c06252751a39d3efd112759d66c27e2808c97c8ebf77a8f7eee] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:55450 - 24595 "HINFO IN 9186905565933172441.5829961822610941402. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.133331801s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=453": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [bd3253885f9884d011496bd37b6aecd95ad3c12f561b0909e8f4f6812aa2b191] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:43811 - 16082 "HINFO IN 6872004173303603088.8086144694367984731. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018411681s
==> describe nodes <==
Name: functional-813066
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-813066
kubernetes.io/os=linux
minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
minikube.k8s.io/name=functional-813066
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_10_14T19_19_14_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 14 Oct 2025 19:19:10 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-813066
AcquireTime: <unset>
RenewTime: Tue, 14 Oct 2025 19:22:10 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 14 Oct 2025 19:21:49 +0000 Tue, 14 Oct 2025 19:19:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 14 Oct 2025 19:21:49 +0000 Tue, 14 Oct 2025 19:19:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 14 Oct 2025 19:21:49 +0000 Tue, 14 Oct 2025 19:19:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 14 Oct 2025 19:21:49 +0000 Tue, 14 Oct 2025 19:19:14 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.244
Hostname: functional-813066
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4008588Ki
pods: 110
System Info:
Machine ID: 28a34433245b426f9230dfb4b8715b4a
System UUID: 28a34433-245b-426f-9230-dfb4b8715b4a
Boot ID: de527f9a-b75f-44bc-818f-1def29d787bf
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-mount 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6s
default hello-node-75c85bcc94-8v96x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22s
default hello-node-connect-7d85dfc575-7f89b 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30s
default mysql-5bb876957f-mncgc 600m (30%) 700m (35%) 512Mi (13%) 700Mi (17%) 32s
kube-system coredns-66bc5c9577-z289p 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m56s
kube-system etcd-functional-813066 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 3m1s
kube-system kube-apiserver-functional-813066 250m (12%) 0 (0%) 0 (0%) 0 (0%) 55s
kube-system kube-controller-manager-functional-813066 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m2s
kube-system kube-proxy-9bjgg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m56s
kube-system kube-scheduler-functional-813066 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m3s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m54s
kubernetes-dashboard dashboard-metrics-scraper-77bf4d6c4c-q682n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
kubernetes-dashboard kubernetes-dashboard-855c9754f9-8pkvp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m54s kube-proxy
Normal Starting 54s kube-proxy
Normal Starting 2m7s kube-proxy
Normal Starting 3m1s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3m1s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m kubelet Node functional-813066 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m kubelet Node functional-813066 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m kubelet Node functional-813066 status is now: NodeHasSufficientPID
Normal NodeReady 3m kubelet Node functional-813066 status is now: NodeReady
Normal RegisteredNode 2m57s node-controller Node functional-813066 event: Registered Node functional-813066 in Controller
Normal Starting 110s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 109s (x8 over 109s) kubelet Node functional-813066 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 109s (x8 over 109s) kubelet Node functional-813066 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 109s (x7 over 109s) kubelet Node functional-813066 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 109s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 103s node-controller Node functional-813066 event: Registered Node functional-813066 in Controller
Normal NodeHasNoDiskPressure 59s (x8 over 59s) kubelet Node functional-813066 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 59s (x8 over 59s) kubelet Node functional-813066 status is now: NodeHasSufficientMemory
Normal Starting 59s kubelet Starting kubelet.
Normal NodeHasSufficientPID 59s (x7 over 59s) kubelet Node functional-813066 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 59s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 52s node-controller Node functional-813066 event: Registered Node functional-813066 in Controller
==> dmesg <==
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.084975] kauditd_printk_skb: 1 callbacks suppressed
[Oct14 19:19] kauditd_printk_skb: 74 callbacks suppressed
[ +0.104254] kauditd_printk_skb: 46 callbacks suppressed
[ +0.149686] kauditd_printk_skb: 171 callbacks suppressed
[ +0.241151] kauditd_printk_skb: 18 callbacks suppressed
[ +11.075914] kauditd_printk_skb: 271 callbacks suppressed
[ +22.704337] kauditd_printk_skb: 16 callbacks suppressed
[Oct14 19:20] kauditd_printk_skb: 84 callbacks suppressed
[ +5.066167] kauditd_printk_skb: 28 callbacks suppressed
[ +6.195426] kauditd_printk_skb: 44 callbacks suppressed
[ +9.327215] kauditd_printk_skb: 29 callbacks suppressed
[ +3.382264] kauditd_printk_skb: 43 callbacks suppressed
[ +11.153583] kauditd_printk_skb: 8 callbacks suppressed
[ +0.116438] kauditd_printk_skb: 12 callbacks suppressed
[Oct14 19:21] kauditd_printk_skb: 107 callbacks suppressed
[ +4.499828] kauditd_printk_skb: 24 callbacks suppressed
[ +4.183006] kauditd_printk_skb: 75 callbacks suppressed
[ +11.491632] kauditd_printk_skb: 43 callbacks suppressed
[ +5.426979] kauditd_printk_skb: 5 callbacks suppressed
[ +0.065891] kauditd_printk_skb: 112 callbacks suppressed
[ +0.000105] kauditd_printk_skb: 65 callbacks suppressed
[ +4.169615] kauditd_printk_skb: 74 callbacks suppressed
[Oct14 19:22] kauditd_printk_skb: 32 callbacks suppressed
[ +4.336296] kauditd_printk_skb: 62 callbacks suppressed
==> etcd [6258054b30fd8d53e67db4fdc1a15516e083db6144b35b9d9ad46fcefcc54c46] <==
{"level":"warn","ts":"2025-10-14T19:21:18.185300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.201906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49670","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.206527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.223023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.231941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49726","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.248034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49754","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.253506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.263815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49784","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.273145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49800","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.284014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.294329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49830","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.300907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49842","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.311279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49862","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.321611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49898","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.328332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49904","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.337841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49922","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.352104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:21:18.421128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-14T19:21:52.572957Z","caller":"traceutil/trace.go:172","msg":"trace[287345922] transaction","detail":"{read_only:false; response_revision:736; number_of_response:1; }","duration":"213.644683ms","start":"2025-10-14T19:21:52.359298Z","end":"2025-10-14T19:21:52.572942Z","steps":["trace[287345922] 'process raft request' (duration: 213.306051ms)"],"step_count":1}
{"level":"info","ts":"2025-10-14T19:21:54.364602Z","caller":"traceutil/trace.go:172","msg":"trace[1092999231] linearizableReadLoop","detail":"{readStateIndex:838; appliedIndex:838; }","duration":"183.069013ms","start":"2025-10-14T19:21:54.181514Z","end":"2025-10-14T19:21:54.364583Z","steps":["trace[1092999231] 'read index received' (duration: 183.064517ms)","trace[1092999231] 'applied index is now lower than readState.Index' (duration: 3.781µs)"],"step_count":2}
{"level":"warn","ts":"2025-10-14T19:21:54.364748Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.182288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-14T19:21:54.364803Z","caller":"traceutil/trace.go:172","msg":"trace[46907595] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:754; }","duration":"183.285637ms","start":"2025-10-14T19:21:54.181510Z","end":"2025-10-14T19:21:54.364796Z","steps":["trace[46907595] 'agreement among raft nodes before linearized reading' (duration: 183.153341ms)"],"step_count":1}
{"level":"info","ts":"2025-10-14T19:21:54.365092Z","caller":"traceutil/trace.go:172","msg":"trace[154408117] transaction","detail":"{read_only:false; response_revision:755; number_of_response:1; }","duration":"243.71353ms","start":"2025-10-14T19:21:54.121371Z","end":"2025-10-14T19:21:54.365085Z","steps":["trace[154408117] 'process raft request' (duration: 243.61316ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-14T19:22:14.043901Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.879797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-mount\" limit:1 ","response":"range_response_count:1 size:3258"}
{"level":"info","ts":"2025-10-14T19:22:14.044015Z","caller":"traceutil/trace.go:172","msg":"trace[19622810] range","detail":"{range_begin:/registry/pods/default/busybox-mount; range_end:; response_count:1; response_revision:871; }","duration":"135.019629ms","start":"2025-10-14T19:22:13.908981Z","end":"2025-10-14T19:22:14.044000Z","steps":["trace[19622810] 'range keys from in-memory index tree' (duration: 134.735886ms)"],"step_count":1}
==> etcd [80ea16c1a937bfda20778ff4b1ba3e117a9f5f841a14cedfd8b76113c88c131a] <==
{"level":"warn","ts":"2025-10-14T19:20:27.110534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.121436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45936","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.129628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45952","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.141861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45966","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.150562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45984","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.163265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
{"level":"warn","ts":"2025-10-14T19:20:27.246294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46024","server-name":"","error":"EOF"}
{"level":"info","ts":"2025-10-14T19:21:08.725806Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-10-14T19:21:08.725885Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-813066","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.244:2380"],"advertise-client-urls":["https://192.168.39.244:2379"]}
{"level":"error","ts":"2025-10-14T19:21:08.725954Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"error","ts":"2025-10-14T19:21:08.727651Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
{"level":"info","ts":"2025-10-14T19:21:08.727855Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38b93d7e943acb5d","current-leader-member-id":"38b93d7e943acb5d"}
{"level":"info","ts":"2025-10-14T19:21:08.727936Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
{"level":"info","ts":"2025-10-14T19:21:08.727948Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
{"level":"error","ts":"2025-10-14T19:21:08.727803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-14T19:21:08.728265Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-14T19:21:08.728543Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-14T19:21:08.728568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"warn","ts":"2025-10-14T19:21:08.728657Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.244:2379: use of closed network connection"}
{"level":"warn","ts":"2025-10-14T19:21:08.728668Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.244:2379: use of closed network connection"}
{"level":"error","ts":"2025-10-14T19:21:08.728674Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.244:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-14T19:21:08.731653Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.244:2380"}
{"level":"error","ts":"2025-10-14T19:21:08.731721Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.244:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
{"level":"info","ts":"2025-10-14T19:21:08.731773Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.244:2380"}
{"level":"info","ts":"2025-10-14T19:21:08.731778Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-813066","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.244:2380"],"advertise-client-urls":["https://192.168.39.244:2379"]}
==> kernel <==
19:22:14 up 3 min, 0 users, load average: 2.20, 0.95, 0.38
Linux functional-813066 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [a5d81d9fe5fa6d7c1eb0c0d89a42ba1378252b629ba57df76f255bbb1a5c0c57] <==
I1014 19:21:19.227133 1 cache.go:39] Caches are synced for LocalAvailability controller
I1014 19:21:19.227635 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1014 19:21:19.233408 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1014 19:21:19.535359 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1014 19:21:20.017434 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1014 19:21:20.425146 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244]
I1014 19:21:20.426617 1 controller.go:667] quota admission added evaluator for: endpoints
I1014 19:21:20.432752 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1014 19:21:20.793921 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1014 19:21:20.839686 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1014 19:21:20.873467 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1014 19:21:20.885341 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1014 19:21:37.973947 1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.70.31"}
I1014 19:21:42.431326 1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.46.209"}
I1014 19:21:42.488567 1 controller.go:667] quota admission added evaluator for: replicasets.apps
I1014 19:21:45.080730 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.149.235"}
I1014 19:21:52.706400 1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.39.25"}
E1014 19:22:04.743660 1 conn.go:339] Error on socket receive: read tcp 192.168.39.244:8441->192.168.39.1:49940: use of closed network connection
E1014 19:22:06.105561 1 conn.go:339] Error on socket receive: read tcp 192.168.39.244:8441->192.168.39.1:53992: use of closed network connection
E1014 19:22:08.356894 1 conn.go:339] Error on socket receive: read tcp 192.168.39.244:8441->192.168.39.1:54040: use of closed network connection
E1014 19:22:10.029155 1 conn.go:339] Error on socket receive: read tcp 192.168.39.244:8441->192.168.39.1:54062: use of closed network connection
I1014 19:22:12.511242 1 controller.go:667] quota admission added evaluator for: namespaces
I1014 19:22:12.803887 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.42.253"}
I1014 19:22:12.849276 1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.208.22"}
E1014 19:22:13.691391 1 conn.go:339] Error on socket receive: read tcp 192.168.39.244:8441->192.168.39.1:54132: use of closed network connection
==> kube-controller-manager [49fb2c37ba348f80bc4e5e6e169534c3be9e9a2ee25a6e79d295ad3262f5949e] <==
I1014 19:20:31.389259 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1014 19:20:31.388799 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1014 19:20:31.389577 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1014 19:20:31.391236 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1014 19:20:31.391265 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1014 19:20:31.391270 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I1014 19:20:31.400525 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1014 19:20:31.410966 1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
I1014 19:20:31.411020 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1014 19:20:31.412272 1 shared_informer.go:356] "Caches are synced" controller="deployment"
I1014 19:20:31.415698 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1014 19:20:31.421672 1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
I1014 19:20:31.422126 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1014 19:20:31.425990 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1014 19:20:31.426147 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1014 19:20:31.426431 1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
I1014 19:20:31.432929 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1014 19:20:31.434386 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1014 19:20:31.434705 1 shared_informer.go:356] "Caches are synced" controller="stateful set"
I1014 19:20:31.434886 1 shared_informer.go:356] "Caches are synced" controller="PV protection"
I1014 19:20:31.435898 1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
I1014 19:20:31.436240 1 shared_informer.go:356] "Caches are synced" controller="endpoint"
I1014 19:20:31.436322 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1014 19:20:31.437260 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1014 19:20:31.441124 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
==> kube-controller-manager [a4ac300cdbe10e0cba5fcd0b0596a1879c53bebd8e94860eb5a8477995200e0a] <==
I1014 19:21:22.518149 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1014 19:21:22.520520 1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
I1014 19:21:22.533920 1 shared_informer.go:356] "Caches are synced" controller="namespace"
I1014 19:21:22.536451 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1014 19:21:22.536647 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1014 19:21:22.536727 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1014 19:21:22.536800 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-813066"
I1014 19:21:22.536831 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1014 19:21:22.542349 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1014 19:21:22.545312 1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
I1014 19:21:22.558132 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1014 19:21:22.558243 1 shared_informer.go:356] "Caches are synced" controller="disruption"
I1014 19:21:22.562577 1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
I1014 19:21:22.562675 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1014 19:21:22.562708 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1014 19:21:22.562714 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
E1014 19:22:12.632122 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.640940 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.650125 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.653155 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.664069 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.671781 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.677327 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.688615 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
E1014 19:22:12.694671 1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
==> kube-proxy [2acb0d26ece89b3cb73c86a442fda6a1293bc70a4839776c4e7f3d6ac751d654] <==
I1014 19:21:20.163794 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1014 19:21:20.275848 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1014 19:21:20.276343 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.244"]
E1014 19:21:20.277338 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1014 19:21:20.372318 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1014 19:21:20.372491 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1014 19:21:20.372601 1 server_linux.go:132] "Using iptables Proxier"
I1014 19:21:20.383534 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1014 19:21:20.384069 1 server.go:527] "Version info" version="v1.34.1"
I1014 19:21:20.384106 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1014 19:21:20.390021 1 config.go:200] "Starting service config controller"
I1014 19:21:20.390061 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1014 19:21:20.390079 1 config.go:106] "Starting endpoint slice config controller"
I1014 19:21:20.390083 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1014 19:21:20.390093 1 config.go:403] "Starting serviceCIDR config controller"
I1014 19:21:20.390096 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1014 19:21:20.391680 1 config.go:309] "Starting node config controller"
I1014 19:21:20.391708 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1014 19:21:20.391714 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1014 19:21:20.491165 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1014 19:21:20.491291 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1014 19:21:20.491308 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
==> kube-proxy [8e4ea712a1f42fec720eafa26083c8609b165225fa6692666f63b5d18f477b8c] <==
I1014 19:20:07.534081 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1014 19:20:07.635259 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1014 19:20:07.635310 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.244"]
E1014 19:20:07.635371 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1014 19:20:07.710073 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1014 19:20:07.710142 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1014 19:20:07.710167 1 server_linux.go:132] "Using iptables Proxier"
I1014 19:20:07.721769 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1014 19:20:07.722921 1 server.go:527] "Version info" version="v1.34.1"
I1014 19:20:07.723105 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1014 19:20:07.733025 1 config.go:309] "Starting node config controller"
I1014 19:20:07.733267 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1014 19:20:07.733365 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1014 19:20:07.735979 1 config.go:200] "Starting service config controller"
I1014 19:20:07.736012 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1014 19:20:07.736030 1 config.go:106] "Starting endpoint slice config controller"
I1014 19:20:07.736034 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1014 19:20:07.736044 1 config.go:403] "Starting serviceCIDR config controller"
I1014 19:20:07.736047 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1014 19:20:07.836578 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1014 19:20:07.836633 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1014 19:20:07.838275 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
==> kube-scheduler [7774b54955bca83fcce3f1e546fb4a6bbf596b9d6d4bffaada6da018eb846844] <==
I1014 19:21:16.633584 1 serving.go:386] Generated self-signed cert in-memory
W1014 19:21:19.054737 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1014 19:21:19.054771 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1014 19:21:19.054794 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1014 19:21:19.054800 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1014 19:21:19.136885 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1014 19:21:19.139290 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1014 19:21:19.150681 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1014 19:21:19.150768 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1014 19:21:19.152709 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1014 19:21:19.152955 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1014 19:21:19.250907 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kube-scheduler [a5ca6625d76fa59b74b8dabafad01d23c3bd5d2acb8a9035ed05b8d0e0225b60] <==
E1014 19:20:27.991372 1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1014 19:20:27.991417 1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1014 19:20:28.013747 1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1014 19:20:28.014146 1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1014 19:20:28.014230 1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1014 19:20:28.014246 1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1014 19:20:28.016395 1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1014 19:20:28.016797 1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1014 19:20:28.016817 1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1014 19:20:28.017238 1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1014 19:20:28.017250 1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1014 19:20:28.017260 1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1014 19:20:28.017268 1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1014 19:20:28.017276 1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1014 19:20:28.017490 1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1014 19:20:28.017798 1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1014 19:20:28.018058 1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1014 19:20:28.020386 1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1014 19:20:28.020401 1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
I1014 19:21:13.926562 1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
I1014 19:21:13.926631 1 server.go:263] "[graceful-termination] secure server has stopped listening"
I1014 19:21:13.926657 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I1014 19:21:13.927115 1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
I1014 19:21:13.927300 1 server.go:265] "[graceful-termination] secure server is exiting"
E1014 19:21:13.927383 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Oct 14 19:21:52 functional-813066 kubelet[5209]: I1014 19:21:52.778534 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswpz\" (UniqueName: \"kubernetes.io/projected/c2c2b3b3-77f7-4035-b725-9e4a20df0c71-kube-api-access-nswpz\") pod \"hello-node-75c85bcc94-8v96x\" (UID: \"c2c2b3b3-77f7-4035-b725-9e4a20df0c71\") " pod="default/hello-node-75c85bcc94-8v96x"
Oct 14 19:21:59 functional-813066 kubelet[5209]: I1014 19:21:59.809769 5209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/mysql-5bb876957f-mncgc" podStartSLOduration=3.827479753 podStartE2EDuration="17.809753407s" podCreationTimestamp="2025-10-14 19:21:42 +0000 UTC" firstStartedPulling="2025-10-14 19:21:43.134823274 +0000 UTC m=+27.779107293" lastFinishedPulling="2025-10-14 19:21:57.117096941 +0000 UTC m=+41.761380947" observedRunningTime="2025-10-14 19:21:57.800981451 +0000 UTC m=+42.445265472" watchObservedRunningTime="2025-10-14 19:21:59.809753407 +0000 UTC m=+44.454037494"
Oct 14 19:22:04 functional-813066 kubelet[5209]: E1014 19:22:04.743944 5209 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.122.39:47536->192.168.122.39:10010: write tcp 192.168.122.39:47536->192.168.122.39:10010: write: broken pipe
Oct 14 19:22:06 functional-813066 kubelet[5209]: I1014 19:22:06.856589 5209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-7d85dfc575-7f89b" podStartSLOduration=9.145441863 podStartE2EDuration="22.856573944s" podCreationTimestamp="2025-10-14 19:21:44 +0000 UTC" firstStartedPulling="2025-10-14 19:21:45.611253955 +0000 UTC m=+30.255537973" lastFinishedPulling="2025-10-14 19:21:59.322386045 +0000 UTC m=+43.966670054" observedRunningTime="2025-10-14 19:21:59.813151076 +0000 UTC m=+44.457435103" watchObservedRunningTime="2025-10-14 19:22:06.856573944 +0000 UTC m=+51.500857967"
Oct 14 19:22:07 functional-813066 kubelet[5209]: I1014 19:22:07.856972 5209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.89960299 podStartE2EDuration="17.856951438s" podCreationTimestamp="2025-10-14 19:21:50 +0000 UTC" firstStartedPulling="2025-10-14 19:21:51.1764857 +0000 UTC m=+35.820769706" lastFinishedPulling="2025-10-14 19:22:06.133834148 +0000 UTC m=+50.778118154" observedRunningTime="2025-10-14 19:22:06.857024102 +0000 UTC m=+51.501308125" watchObservedRunningTime="2025-10-14 19:22:07.856951438 +0000 UTC m=+52.501235465"
Oct 14 19:22:07 functional-813066 kubelet[5209]: I1014 19:22:07.857119 5209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-75c85bcc94-8v96x" podStartSLOduration=2.09747223 podStartE2EDuration="15.857106431s" podCreationTimestamp="2025-10-14 19:21:52 +0000 UTC" firstStartedPulling="2025-10-14 19:21:53.260619382 +0000 UTC m=+37.904903388" lastFinishedPulling="2025-10-14 19:22:07.020253584 +0000 UTC m=+51.664537589" observedRunningTime="2025-10-14 19:22:07.85525901 +0000 UTC m=+52.499543037" watchObservedRunningTime="2025-10-14 19:22:07.857106431 +0000 UTC m=+52.501390458"
Oct 14 19:22:08 functional-813066 kubelet[5209]: I1014 19:22:08.304522 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c7473c8f-cfa0-4800-b7d2-ba435dc3b6b5-test-volume\") pod \"busybox-mount\" (UID: \"c7473c8f-cfa0-4800-b7d2-ba435dc3b6b5\") " pod="default/busybox-mount"
Oct 14 19:22:08 functional-813066 kubelet[5209]: I1014 19:22:08.304582 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkhj\" (UniqueName: \"kubernetes.io/projected/c7473c8f-cfa0-4800-b7d2-ba435dc3b6b5-kube-api-access-chkhj\") pod \"busybox-mount\" (UID: \"c7473c8f-cfa0-4800-b7d2-ba435dc3b6b5\") " pod="default/busybox-mount"
Oct 14 19:22:12 functional-813066 kubelet[5209]: I1014 19:22:12.837570 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/07ae4d38-3b9f-4aa3-a8b2-a544abe2f710-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-q682n\" (UID: \"07ae4d38-3b9f-4aa3-a8b2-a544abe2f710\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-q682n"
Oct 14 19:22:12 functional-813066 kubelet[5209]: I1014 19:22:12.838055 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85g7\" (UniqueName: \"kubernetes.io/projected/07ae4d38-3b9f-4aa3-a8b2-a544abe2f710-kube-api-access-b85g7\") pod \"dashboard-metrics-scraper-77bf4d6c4c-q682n\" (UID: \"07ae4d38-3b9f-4aa3-a8b2-a544abe2f710\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-q682n"
Oct 14 19:22:12 functional-813066 kubelet[5209]: I1014 19:22:12.938669 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88ng\" (UniqueName: \"kubernetes.io/projected/aca0e0e4-8bdc-4316-8ec4-b02d17386444-kube-api-access-l88ng\") pod \"kubernetes-dashboard-855c9754f9-8pkvp\" (UID: \"aca0e0e4-8bdc-4316-8ec4-b02d17386444\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8pkvp"
Oct 14 19:22:12 functional-813066 kubelet[5209]: I1014 19:22:12.938800 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aca0e0e4-8bdc-4316-8ec4-b02d17386444-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8pkvp\" (UID: \"aca0e0e4-8bdc-4316-8ec4-b02d17386444\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8pkvp"
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.344318 5209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-mount" podStartSLOduration=1.585343317 podStartE2EDuration="6.3443013s" podCreationTimestamp="2025-10-14 19:22:08 +0000 UTC" firstStartedPulling="2025-10-14 19:22:08.703004666 +0000 UTC m=+53.347288676" lastFinishedPulling="2025-10-14 19:22:13.461962653 +0000 UTC m=+58.106246659" observedRunningTime="2025-10-14 19:22:14.07658048 +0000 UTC m=+58.720864506" watchObservedRunningTime="2025-10-14 19:22:14.3443013 +0000 UTC m=+58.988585320"
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.454424 5209 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mpt\" (UniqueName: \"kubernetes.io/projected/6b8f87c2-4d76-4045-b907-b23d8544109d-kube-api-access-t5mpt\") pod \"6b8f87c2-4d76-4045-b907-b23d8544109d\" (UID: \"6b8f87c2-4d76-4045-b907-b23d8544109d\") "
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.454486 5209 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/6b8f87c2-4d76-4045-b907-b23d8544109d-pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22\") pod \"6b8f87c2-4d76-4045-b907-b23d8544109d\" (UID: \"6b8f87c2-4d76-4045-b907-b23d8544109d\") "
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.454576 5209 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b8f87c2-4d76-4045-b907-b23d8544109d-pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22" (OuterVolumeSpecName: "mypd") pod "6b8f87c2-4d76-4045-b907-b23d8544109d" (UID: "6b8f87c2-4d76-4045-b907-b23d8544109d"). InnerVolumeSpecName "pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.461757 5209 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8f87c2-4d76-4045-b907-b23d8544109d-kube-api-access-t5mpt" (OuterVolumeSpecName: "kube-api-access-t5mpt") pod "6b8f87c2-4d76-4045-b907-b23d8544109d" (UID: "6b8f87c2-4d76-4045-b907-b23d8544109d"). InnerVolumeSpecName "kube-api-access-t5mpt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.555870 5209 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5mpt\" (UniqueName: \"kubernetes.io/projected/6b8f87c2-4d76-4045-b907-b23d8544109d-kube-api-access-t5mpt\") on node \"functional-813066\" DevicePath \"\""
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.555908 5209 reconciler_common.go:299] "Volume detached for volume \"pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22\" (UniqueName: \"kubernetes.io/host-path/6b8f87c2-4d76-4045-b907-b23d8544109d-pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22\") on node \"functional-813066\" DevicePath \"\""
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.913034 5209 scope.go:117] "RemoveContainer" containerID="c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0"
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.935331 5209 scope.go:117] "RemoveContainer" containerID="c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0"
Oct 14 19:22:14 functional-813066 kubelet[5209]: E1014 19:22:14.936147 5209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\": not found" containerID="c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0"
Oct 14 19:22:14 functional-813066 kubelet[5209]: I1014 19:22:14.936399 5209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0"} err="failed to get container status \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7f463f486005af5e49063ff53cc3d2258fb645ef96ed047c7fe6c905a1582a0\": not found"
Oct 14 19:22:15 functional-813066 kubelet[5209]: I1014 19:22:15.261674 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfvg\" (UniqueName: \"kubernetes.io/projected/23a93082-fdff-4f0a-a6d0-f0d25f385a64-kube-api-access-pcfvg\") pod \"sp-pod\" (UID: \"23a93082-fdff-4f0a-a6d0-f0d25f385a64\") " pod="default/sp-pod"
Oct 14 19:22:15 functional-813066 kubelet[5209]: I1014 19:22:15.261725 5209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22\" (UniqueName: \"kubernetes.io/host-path/23a93082-fdff-4f0a-a6d0-f0d25f385a64-pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22\") pod \"sp-pod\" (UID: \"23a93082-fdff-4f0a-a6d0-f0d25f385a64\") " pod="default/sp-pod"
==> storage-provisioner [61e5f2fe5ea0c3c10b3f1f7e0b55c5b32bb25cd98b0db7bc9d7fbbe8100b0a85] <==
I1014 19:21:50.210765 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e2a419af-6bbe-44ac-8897-73c8ab8b0e22", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e2a419af-6bbe-44ac-8897-73c8ab8b0e22
W1014 19:21:52.102094 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:52.113615 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:54.117892 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:54.371491 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:56.377701 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:56.385699 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:58.389619 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:21:58.398979 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:00.405055 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:00.414785 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:02.441047 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:02.466089 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:04.494646 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:04.510395 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:06.514720 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:06.520751 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:08.528558 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:08.543242 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:10.548827 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:10.559483 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:12.568744 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:12.591107 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:14.596676 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1014 19:22:14.605044 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [cbd90021f8f8b01b22f149593df7b3af427d8d3b0ea683fe0b88aea173b7e2f9] <==
I1014 19:21:19.979366 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1014 19:21:19.981603 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-813066 -n functional-813066
helpers_test.go:269: (dbg) Run: kubectl --context functional-813066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: sp-pod dashboard-metrics-scraper-77bf4d6c4c-q682n kubernetes-dashboard-855c9754f9-8pkvp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context functional-813066 describe pod sp-pod dashboard-metrics-scraper-77bf4d6c4c-q682n kubernetes-dashboard-855c9754f9-8pkvp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-813066 describe pod sp-pod dashboard-metrics-scraper-77bf4d6c4c-q682n kubernetes-dashboard-855c9754f9-8pkvp: exit status 1 (89.535946ms)
-- stdout --
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-813066/192.168.39.244
Start Time: Tue, 14 Oct 2025 19:22:15 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pcfvg (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-pcfvg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
Optional: false
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1s default-scheduler Successfully assigned default/sp-pod to functional-813066
Normal Pulling 1s kubelet Pulling image "docker.io/nginx"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-q682n" not found
Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8pkvp" not found
** /stderr **
helpers_test.go:287: kubectl --context functional-813066 describe pod sp-pod dashboard-metrics-scraper-77bf4d6c4c-q682n kubernetes-dashboard-855c9754f9-8pkvp: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.51s)