=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
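(The assertion at functional_test.go:918 fires because no http:// URL ever appeared on the dashboard command's stdout before the test gave up; the stop and stderr dump below show what the command did instead. A minimal, hypothetical sketch of that kind of URL scan over a child process's output; the regexp, the stand-in echo command, and the structure are illustrative, not minikube's actual test helper:

// urlscan_sketch.go: hypothetical sketch of scanning a command's stdout for a URL.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Stand-in for the dashboard command's stdout; in the real test this would
	// be the `minikube dashboard --url ...` process.
	cmd := exec.Command("echo", "dashboard available at http://127.0.0.1:36195/")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	urlRe := regexp.MustCompile(`https?://\S+`)
	found := ""
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if m := urlRe.FindString(sc.Text()); m != "" {
			found = m
			break
		}
	}
	_ = cmd.Wait()

	if found == "" {
		fmt.Println("output didn't produce a URL") // the failure reported above
		return
	}
	fmt.Println("dashboard URL:", found)
}
)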
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] stderr:
I0127 14:15:38.722922 499148 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:38.723055 499148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:38.723063 499148 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:38.723068 499148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:38.723238 499148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:38.723484 499148 mustload.go:65] Loading cluster: functional-519899
I0127 14:15:38.723867 499148 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:38.724257 499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.724308 499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.741345 499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
I0127 14:15:38.741937 499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.742528 499148 main.go:141] libmachine: Using API Version 1
I0127 14:15:38.742555 499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.742940 499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.743210 499148 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:38.745010 499148 host.go:66] Checking if "functional-519899" exists ...
I0127 14:15:38.745462 499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.745520 499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.761409 499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
I0127 14:15:38.761989 499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.762527 499148 main.go:141] libmachine: Using API Version 1
I0127 14:15:38.762555 499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.762903 499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.763118 499148 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:38.763287 499148 api_server.go:166] Checking apiserver status ...
I0127 14:15:38.763349 499148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:15:38.763384 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:38.766405 499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.766951 499148 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:38.766985 499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.767050 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:38.767289 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:38.767447 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:38.767628 499148 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:38.853796 499148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4327/cgroup
W0127 14:15:38.863397 499148 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4327/cgroup: Process exited with status 1
stdout:
stderr:
I0127 14:15:38.863492 499148 ssh_runner.go:195] Run: ls
I0127 14:15:38.867776 499148 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8441/healthz ...
I0127 14:15:38.873677 499148 api_server.go:279] https://192.168.39.137:8441/healthz returned 200:
ok
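(The stanza above is minikube's apiserver status probe: pgrep for the kube-apiserver process, a best-effort look for its freezer cgroup, which exits 1 here and is logged only as a warning, harmless when the freezer controller is not listed in /proc/PID/cgroup, and finally an HTTPS GET of /healthz that returns 200. A rough, self-contained sketch of that last healthz call, assuming the client certificate, key, and CA paths shown later in this log; an illustration, not minikube's api_server.go:

// healthz_sketch.go: rough sketch of probing the apiserver's /healthz endpoint
// with the profile's client certs. Paths come from the kapi.go line below;
// everything else is an assumption for illustration.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	base := "/home/jenkins/minikube-integration/20321-483699/.minikube"
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/functional-519899/client.crt",
		base+"/profiles/functional-519899/client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse CA certificate")
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.39.137:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.39.137:8441/healthz returned %d: %s\n", resp.StatusCode, body)
}
)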
W0127 14:15:38.873745 499148 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0127 14:15:38.873972 499148 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:38.873995 499148 addons.go:69] Setting dashboard=true in profile "functional-519899"
I0127 14:15:38.874005 499148 addons.go:238] Setting addon dashboard=true in "functional-519899"
I0127 14:15:38.874038 499148 host.go:66] Checking if "functional-519899" exists ...
I0127 14:15:38.874482 499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.874629 499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.892003 499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
I0127 14:15:38.892491 499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.893141 499148 main.go:141] libmachine: Using API Version 1
I0127 14:15:38.893175 499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.893547 499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.894280 499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.894332 499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.916126 499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
I0127 14:15:38.916640 499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.917266 499148 main.go:141] libmachine: Using API Version 1
I0127 14:15:38.917288 499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.918847 499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.919072 499148 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:38.920920 499148 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:38.923567 499148 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 14:15:38.925147 499148 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0127 14:15:38.926395 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 14:15:38.926413 499148 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 14:15:38.926450 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:38.931230 499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.931675 499148 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:38.931715 499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.931859 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:38.932078 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:38.932222 499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:38.932327 499148 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:39.090314 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 14:15:39.090374 499148 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 14:15:39.115522 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 14:15:39.115551 499148 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 14:15:39.137485 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:15:39.137517 499148 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:15:39.156191 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:15:39.156221 499148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0127 14:15:39.173355 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:15:39.173394 499148 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:15:39.191698 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:15:39.191736 499148 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:15:39.209754 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:15:39.209788 499148 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:15:39.227581 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:15:39.227613 499148 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:15:39.244668 499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:15:39.244730 499148 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:15:39.261750 499148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:15:40.413853 499148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.152040692s)
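(The preceding lines copy the dashboard manifests into /etc/kubernetes/addons/ on the node and apply them with the bundled kubectl over SSH. As a hedged illustration only, not minikube's ssh_runner, the same apply could be re-run by hand through `minikube ssh`:

// applyaddons_sketch.go: illustrative re-run of the apply step via `minikube ssh`.
// Paths and the kubectl invocation are taken from the log line above; the rest
// is an assumption for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"-p", "functional-519899", "ssh", "--",
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
)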
I0127 14:15:40.413946 499148 main.go:141] libmachine: Making call to close driver server
I0127 14:15:40.413970 499148 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:40.414344 499148 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:40.414404 499148 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:40.414416 499148 main.go:141] libmachine: Making call to close driver server
I0127 14:15:40.414425 499148 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:40.414366 499148 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:40.414725 499148 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:40.414747 499148 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:40.414748 499148 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:40.416626 499148 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-519899 addons enable metrics-server
I0127 14:15:40.418482 499148 addons.go:201] Writing out "functional-519899" config to set dashboard=true...
W0127 14:15:40.418794 499148 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0127 14:15:40.419653 499148 kapi.go:59] client config for functional-519899: &rest.Config{Host:"https://192.168.39.137:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt", KeyFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.key", CAFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 14:15:40.430047 499148 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 346761f1-3c5c-4b64-a594-07e84a1a22ea 812 0 2025-01-27 14:15:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-01-27 14:15:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.84.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.84.190],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0127 14:15:40.430197 499148 out.go:270] * Launching proxy ...
* Launching proxy ...
I0127 14:15:40.430282 499148 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-519899 proxy --port 36195]
I0127 14:15:40.430615 499148 dashboard.go:157] Waiting for kubectl to output host:port ...
I0127 14:15:40.480537 499148 out.go:201]
W0127 14:15:40.481919 499148 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0127 14:15:40.481937 499148 out.go:270] *
*
W0127 14:15:40.485316 499148 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 14:15:40.486748 499148 out.go:201]
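(The proximate failure is the HOST_KUBECTL_PROXY exit above: minikube started `kubectl --context functional-519899 proxy --port 36195` (dashboard.go:152) and then waited for the child to print its listen address (dashboard.go:157), but the child's stdout reached EOF first, so readByteWithTimeout returned EOF, no host:port was parsed, and no dashboard URL was printed, which is also why the test reported "output didn't produce a URL". kubectl proxy normally announces itself with a line like "Starting to serve on 127.0.0.1:36195", and an immediate EOF usually means the proxy exited right away, for example because the port was already in use. A rough sketch of that wait-for-host:port pattern; helper names and the exact parsing are assumptions, not minikube's actual dashboard.go:

// proxywait_sketch.go: rough sketch of waiting for `kubectl proxy` to announce
// its listen address before building the dashboard URL.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-519899", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// kubectl proxy normally prints: "Starting to serve on 127.0.0.1:36195".
	re := regexp.MustCompile(`Starting to serve on ([\d.]+:\d+)`)
	found := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if m := re.FindStringSubmatch(sc.Text()); m != nil {
				found <- m[1]
				return
			}
		}
		// Stdout hit EOF before the announce line: the case logged above as
		// "readByteWithTimeout: EOF".
		close(found)
	}()

	select {
	case hostPort, ok := <-found:
		if !ok {
			fmt.Println("kubectl proxy: EOF before host:port (HOST_KUBECTL_PROXY)")
			return
		}
		fmt.Println("dashboard proxy at http://" + hostPort + "/")
	case <-time.After(30 * time.Second):
		fmt.Println("timed out waiting for kubectl proxy to announce host:port")
	}
}

Reading the announce line in a goroutine keeps the two failure modes distinguishable in the select: a proxy that never starts hits the timeout, while one that dies immediately closes the channel, which roughly matches the EOF case reported in this log.)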
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-519899 -n functional-519899
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-519899 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 logs -n 25: (2.005516421s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-519899 ssh findmnt | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| mount | -p functional-519899 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port1769431418/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| image | functional-519899 image ls | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /etc/ssl/certs/491036.pem | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /usr/share/ca-certificates/491036.pem | | | | | |
| ssh | functional-519899 ssh findmnt | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /etc/ssl/certs/51391683.0 | | | | | |
| ssh | functional-519899 ssh -- ls | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | -la /mount-9p | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /etc/ssl/certs/4910362.pem | | | | | |
| ssh | functional-519899 ssh cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /mount-9p/test-1737987331440322228 | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /usr/share/ca-certificates/4910362.pem | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| ssh | functional-519899 ssh sudo cat | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /etc/test/nested/copy/491036/hosts | | | | | |
| start | -p functional-519899 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-519899 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-519899 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| image | functional-519899 image load --daemon | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | kicbase/echo-server:functional-519899 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-519899 image ls | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| image | functional-519899 image save kicbase/echo-server:functional-519899 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-519899 image rm | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | kicbase/echo-server:functional-519899 | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-519899 image ls | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| image | functional-519899 image load | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar | | | | | |
| | --alsologtostderr | | | | | |
| image | functional-519899 image ls | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| image | functional-519899 image save --daemon | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
| | kicbase/echo-server:functional-519899 | | | | | |
| | --alsologtostderr | | | | | |
| dashboard | --url --port 36195 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | |
| | -p functional-519899 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 14:15:33
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 14:15:33.855779 498871 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:33.855890 498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:33.855902 498871 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:33.855908 498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:33.856175 498871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:33.856747 498871 out.go:352] Setting JSON to false
I0127 14:15:33.857773 498871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14282,"bootTime":1737973052,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 14:15:33.857849 498871 start.go:139] virtualization: kvm guest
I0127 14:15:33.860051 498871 out.go:177] * [functional-519899] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 14:15:33.861491 498871 out.go:177] - MINIKUBE_LOCATION=20321
I0127 14:15:33.861513 498871 notify.go:220] Checking for updates...
I0127 14:15:33.864184 498871 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 14:15:33.865390 498871 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
I0127 14:15:33.866559 498871 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
I0127 14:15:33.867781 498871 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 14:15:33.868909 498871 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 14:15:33.870723 498871 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:33.871108 498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:33.871189 498871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:33.887261 498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
I0127 14:15:33.887730 498871 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:33.888303 498871 main.go:141] libmachine: Using API Version 1
I0127 14:15:33.888323 498871 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:33.888701 498871 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:33.888886 498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:33.889195 498871 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 14:15:33.889495 498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:33.889535 498871 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:33.905438 498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
I0127 14:15:33.905881 498871 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:33.906378 498871 main.go:141] libmachine: Using API Version 1
I0127 14:15:33.906409 498871 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:33.906711 498871 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:33.906892 498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:33.942169 498871 out.go:177] * Using the kvm2 driver based on the existing profile
I0127 14:15:33.943335 498871 start.go:297] selected driver: kvm2
I0127 14:15:33.943346 498871 start.go:901] validating driver "kvm2" against &{Name:functional-519899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-519899 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 14:15:33.943448 498871 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 14:15:33.945369 498871 out.go:201]
W0127 14:15:33.946502 498871 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
I0127 14:15:33.947718 498871 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
ca7980858410d 9bea9f2796e23 1 second ago Running myfrontend 0 85df2b7f12738 sp-pod
fff6187880a5c 82e4c8a736a4f 18 seconds ago Running echoserver 0 a2c4307764681 hello-node-connect-58f9cf68d8-dzvt5
4a8f84d776cb6 82e4c8a736a4f 18 seconds ago Running echoserver 0 469f088145ca4 hello-node-fcfd88b6f-ph7x4
741e2b57791f2 6e38f40d628db 38 seconds ago Running storage-provisioner 4 d6a43a83e09ac storage-provisioner
a222c673336ad 6e38f40d628db 49 seconds ago Exited storage-provisioner 3 d6a43a83e09ac storage-provisioner
f4be7e8aeca19 e29f9c7391fd9 49 seconds ago Running kube-proxy 2 75ef3881a3df6 kube-proxy-vntzg
7f06a7696c358 c69fa2e9cbf5f 49 seconds ago Running coredns 2 11dd732b41f00 coredns-668d6bf9bc-jghnv
282819135765c 95c0bda56fc4d 53 seconds ago Running kube-apiserver 0 af304093a19da kube-apiserver-functional-519899
f74175cfd4f74 2b0d6572d062c 53 seconds ago Running kube-scheduler 2 928139d9847a6 kube-scheduler-functional-519899
e98ee85ad6055 019ee182b58e2 53 seconds ago Running kube-controller-manager 2 57fce32a8ba3e kube-controller-manager-functional-519899
ed421f8a47a1e a9e7e6b294baf 53 seconds ago Running etcd 2 1bb0bb8f6a1c5 etcd-functional-519899
d484539158587 019ee182b58e2 About a minute ago Exited kube-controller-manager 1 57fce32a8ba3e kube-controller-manager-functional-519899
16105ddecb22b 2b0d6572d062c About a minute ago Exited kube-scheduler 1 928139d9847a6 kube-scheduler-functional-519899
0cc5248f36c04 a9e7e6b294baf About a minute ago Exited etcd 1 1bb0bb8f6a1c5 etcd-functional-519899
9e28ff4b65aa1 c69fa2e9cbf5f About a minute ago Exited coredns 1 11dd732b41f00 coredns-668d6bf9bc-jghnv
407e9934802a4 e29f9c7391fd9 About a minute ago Exited kube-proxy 1 75ef3881a3df6 kube-proxy-vntzg
==> containerd <==
Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.917440023Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\""
Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.920239529Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-519899\""
Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.922725818Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.932721825Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\" returns successfully"
Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.182497274Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-519899\""
Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.190248574Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.190828370Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-519899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.215129212Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\""
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229583775Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\" returns successfully"
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229856542Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-519899\""
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229999057Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.887463497Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-519899\""
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.891685622Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.892128541Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-519899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.208862331Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.216268695Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=72091372"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.221197069Z" level=info msg="ImageCreate event name:\"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.229579400Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.236587174Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a\", size \"72080558\" in 13.2274246s"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.236657461Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\""
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.245944401Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.247717897Z" level=info msg="CreateContainer within sandbox \"85df2b7f12738b5015b566578de43370ff595be53bae8c4b3b8de67a7394d790\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.293535520Z" level=info msg="CreateContainer within sandbox \"85df2b7f12738b5015b566578de43370ff595be53bae8c4b3b8de67a7394d790\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\""
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.294460557Z" level=info msg="StartContainer for \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\""
Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.396853443Z" level=info msg="StartContainer for \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\" returns successfully"
==> coredns [7f06a7696c35892350d986f6ca4c5539a80c135ac380e7b95728d45b7fa2f78e] <==
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:42270 - 16175 "HINFO IN 737029073090806261.4182980186629237895. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.159662686s
==> coredns [9e28ff4b65aa1b5fab470c6ea6f44ccc628f999e78ef7c106ae1466423c265f7] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:58078 - 36221 "HINFO IN 7436575724398093421.2335735018293160571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021117483s
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-519899
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-519899
kubernetes.io/os=linux
minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
minikube.k8s.io/name=functional-519899
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T14_13_20_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 14:13:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-519899
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 14:15:31 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 27 Jan 2025 14:14:51 +0000 Mon, 27 Jan 2025 14:13:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 27 Jan 2025 14:14:51 +0000 Mon, 27 Jan 2025 14:13:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 27 Jan 2025 14:14:51 +0000 Mon, 27 Jan 2025 14:13:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 27 Jan 2025 14:14:51 +0000 Mon, 27 Jan 2025 14:13:20 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.137
Hostname: functional-519899
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
System Info:
Machine ID: a26fc175814d45c6af835d6f84a794a3
System UUID: a26fc175-814d-45c6-af83-5d6f84a794a3
Boot ID: c62c33cd-55d8-42a7-b1af-368c686d6579
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-mount 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s
default hello-node-connect-58f9cf68d8-dzvt5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21s
default hello-node-fcfd88b6f-ph7x4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22s
default mysql-58ccfd96bb-htngb 600m (30%) 700m (35%) 512Mi (13%) 700Mi (18%) 7s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15s
kube-system coredns-668d6bf9bc-jghnv 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m17s
kube-system etcd-functional-519899 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m21s
kube-system kube-apiserver-functional-519899 250m (12%) 0 (0%) 0 (0%) 0 (0%) 50s
kube-system kube-controller-manager-functional-519899 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system kube-proxy-vntzg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m17s
kube-system kube-scheduler-functional-519899 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m23s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m15s
kubernetes-dashboard dashboard-metrics-scraper-5d59dccf9b-nz2b5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
kubernetes-dashboard kubernetes-dashboard-7779f9b69b-xww99 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m15s kube-proxy
Normal Starting 49s kube-proxy
Normal Starting 99s kube-proxy
Normal Starting 2m22s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m21s kubelet Node functional-519899 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m21s kubelet Node functional-519899 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m21s kubelet Node functional-519899 status is now: NodeHasSufficientPID
Normal NodeReady 2m21s kubelet Node functional-519899 status is now: NodeReady
Normal NodeAllocatableEnforced 2m21s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m18s node-controller Node functional-519899 event: Registered Node functional-519899 in Controller
Normal NodeHasSufficientPID 105s (x7 over 105s) kubelet Node functional-519899 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 105s (x8 over 105s) kubelet Node functional-519899 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 105s (x8 over 105s) kubelet Node functional-519899 status is now: NodeHasNoDiskPressure
Normal Starting 105s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 105s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 99s node-controller Node functional-519899 event: Registered Node functional-519899 in Controller
Normal Starting 54s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 54s (x8 over 54s) kubelet Node functional-519899 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 54s (x8 over 54s) kubelet Node functional-519899 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 54s (x7 over 54s) kubelet Node functional-519899 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 54s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 47s node-controller Node functional-519899 event: Registered Node functional-519899 in Controller
==> dmesg <==
[ +0.317655] systemd-fstab-generator[2171]: Ignoring "noauto" option for root device
[ +0.084276] kauditd_printk_skb: 88 callbacks suppressed
[ +1.530275] systemd-fstab-generator[2326]: Ignoring "noauto" option for root device
[ +5.838432] kauditd_printk_skb: 40 callbacks suppressed
[ +10.151442] kauditd_printk_skb: 2 callbacks suppressed
[ +1.589419] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
[Jan27 14:14] kauditd_printk_skb: 36 callbacks suppressed
[ +13.041908] systemd-fstab-generator[3109]: Ignoring "noauto" option for root device
[ +11.717923] systemd-fstab-generator[3469]: Ignoring "noauto" option for root device
[ +0.083662] kauditd_printk_skb: 14 callbacks suppressed
[ +0.071094] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
[ +0.186325] systemd-fstab-generator[3495]: Ignoring "noauto" option for root device
[ +0.140064] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
[ +0.295330] systemd-fstab-generator[3536]: Ignoring "noauto" option for root device
[ +1.839889] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
[ +10.893667] kauditd_printk_skb: 125 callbacks suppressed
[ +6.527176] systemd-fstab-generator[4114]: Ignoring "noauto" option for root device
[ +4.254109] kauditd_printk_skb: 39 callbacks suppressed
[Jan27 14:15] kauditd_printk_skb: 15 callbacks suppressed
[ +4.928007] systemd-fstab-generator[4675]: Ignoring "noauto" option for root device
[ +6.693240] kauditd_printk_skb: 12 callbacks suppressed
[ +5.000419] kauditd_printk_skb: 24 callbacks suppressed
[ +6.678643] kauditd_printk_skb: 27 callbacks suppressed
[ +6.589000] kauditd_printk_skb: 2 callbacks suppressed
[ +6.918214] kauditd_printk_skb: 18 callbacks suppressed
==> etcd [0cc5248f36c0429b2e0c6fa9eeb92986bc137c8d5b795c6bb1129aea2312e9a0] <==
{"level":"info","ts":"2025-01-27T14:13:58.434273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 2"}
{"level":"info","ts":"2025-01-27T14:13:58.434356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 2"}
{"level":"info","ts":"2025-01-27T14:13:58.434417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 3"}
{"level":"info","ts":"2025-01-27T14:13:58.434437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 3"}
{"level":"info","ts":"2025-01-27T14:13:58.434453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 3"}
{"level":"info","ts":"2025-01-27T14:13:58.434505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 3"}
{"level":"info","ts":"2025-01-27T14:13:58.437120Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:functional-519899 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
{"level":"info","ts":"2025-01-27T14:13:58.437394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T14:13:58.437469Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-01-27T14:13:58.437632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-27T14:13:58.437783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T14:13:58.438552Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T14:13:58.438566Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T14:13:58.439358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-27T14:13:58.439366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
{"level":"info","ts":"2025-01-27T14:14:40.837074Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-01-27T14:14:40.837111Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-519899","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
{"level":"warn","ts":"2025-01-27T14:14:40.837208Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-01-27T14:14:40.837234Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-01-27T14:14:40.838853Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
{"level":"warn","ts":"2025-01-27T14:14:40.838954Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
{"level":"info","ts":"2025-01-27T14:14:40.839017Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5527995f6263874a","current-leader-member-id":"5527995f6263874a"}
{"level":"info","ts":"2025-01-27T14:14:40.842975Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
{"level":"info","ts":"2025-01-27T14:14:40.843189Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
{"level":"info","ts":"2025-01-27T14:14:40.843214Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-519899","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
==> etcd [ed421f8a47a1e10ea213aec6181fecb128764248a1f249afef93b17940bcfe5b] <==
{"level":"info","ts":"2025-01-27T14:14:50.011938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 3"}
{"level":"info","ts":"2025-01-27T14:14:50.011997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 3"}
{"level":"info","ts":"2025-01-27T14:14:50.012030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 3"}
{"level":"info","ts":"2025-01-27T14:14:50.012054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 4"}
{"level":"info","ts":"2025-01-27T14:14:50.012062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 4"}
{"level":"info","ts":"2025-01-27T14:14:50.012070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 4"}
{"level":"info","ts":"2025-01-27T14:14:50.012076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 4"}
{"level":"info","ts":"2025-01-27T14:14:50.014046Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:functional-519899 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
{"level":"info","ts":"2025-01-27T14:14:50.014086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T14:14:50.014419Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-01-27T14:14:50.014491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-27T14:14:50.014522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T14:14:50.015036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T14:14:50.015179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T14:14:50.015869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-27T14:14:50.016117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
{"level":"info","ts":"2025-01-27T14:15:39.532896Z","caller":"traceutil/trace.go:171","msg":"trace[1772122477] linearizableReadLoop","detail":"{readStateIndex:829; appliedIndex:828; }","duration":"429.826216ms","start":"2025-01-27T14:15:39.103036Z","end":"2025-01-27T14:15:39.532862Z","steps":["trace[1772122477] 'read index received' (duration: 429.639326ms)","trace[1772122477] 'applied index is now lower than readState.Index' (duration: 186.477µs)"],"step_count":2}
{"level":"info","ts":"2025-01-27T14:15:39.533058Z","caller":"traceutil/trace.go:171","msg":"trace[2121414342] transaction","detail":"{read_only:false; response_revision:756; number_of_response:1; }","duration":"434.818863ms","start":"2025-01-27T14:15:39.098233Z","end":"2025-01-27T14:15:39.533052Z","steps":["trace[2121414342] 'process raft request' (duration: 434.471527ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T14:15:39.533865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"406.424653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T14:15:39.533935Z","caller":"traceutil/trace.go:171","msg":"trace[1200541576] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"407.202409ms","start":"2025-01-27T14:15:39.126700Z","end":"2025-01-27T14:15:39.533902Z","steps":["trace[1200541576] 'agreement among raft nodes before linearized reading' (duration: 406.426236ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T14:15:39.533964Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.126685Z","time spent":"407.270004ms","remote":"127.0.0.1:57394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-01-27T14:15:39.534132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"431.092058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-27T14:15:39.534153Z","caller":"traceutil/trace.go:171","msg":"trace[1226083313] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"431.130678ms","start":"2025-01-27T14:15:39.103014Z","end":"2025-01-27T14:15:39.534145Z","steps":["trace[1226083313] 'agreement among raft nodes before linearized reading' (duration: 431.093735ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T14:15:39.534167Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.103002Z","time spent":"431.161833ms","remote":"127.0.0.1:57394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-01-27T14:15:39.535022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.098215Z","time spent":"434.86301ms","remote":"127.0.0.1:57368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:755 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
==> kernel <==
14:15:42 up 3 min, 0 users, load average: 0.72, 0.43, 0.18
Linux functional-519899 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [282819135765c81958b2c63cc44b4e9218ba2aaa14383527f00985ad1a362269] <==
I0127 14:14:51.274215 1 autoregister_controller.go:144] Starting autoregister controller
I0127 14:14:51.274219 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0127 14:14:51.274224 1 cache.go:39] Caches are synced for autoregister controller
I0127 14:14:51.274682 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0127 14:14:51.275046 1 cache.go:39] Caches are synced for RemoteAvailability controller
I0127 14:14:51.276009 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0127 14:14:51.292618 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I0127 14:14:51.298740 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0127 14:14:51.531610 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0127 14:14:52.091165 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0127 14:14:52.286685 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
I0127 14:14:52.288134 1 controller.go:615] quota admission added evaluator for: endpoints
I0127 14:14:52.292656 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0127 14:14:52.710318 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0127 14:14:52.765380 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0127 14:14:52.792223 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0127 14:14:52.798419 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0127 14:14:54.392192 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0127 14:15:15.161877 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.129.163"}
I0127 14:15:19.888384 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.142.149"}
I0127 14:15:20.471689 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.36.209"}
I0127 14:15:34.069676 1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.24.35"}
I0127 14:15:39.959310 1 controller.go:615] quota admission added evaluator for: namespaces
I0127 14:15:40.366653 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.84.190"}
I0127 14:15:40.402424 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.88.162"}
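Editor's note: the apiserver only allocated cluster IPs for the two dashboard services at 14:15:40, a couple of seconds before the test stopped waiting for a URL. A hedged follow-up check that the services and their endpoints exist (plain kubectl, nothing minikube-specific assumed):
  # Confirm the dashboard services from the log above and their endpoints.
  kubectl --context functional-519899 -n kubernetes-dashboard get svc,endpoints -o wide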
==> kube-controller-manager [d484539158587ce134ce60261c06c2d8bd4481b47698c606582458253761eb7f] <==
I0127 14:14:02.774955 1 shared_informer.go:320] Caches are synced for GC
I0127 14:14:02.775132 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0127 14:14:02.775253 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0127 14:14:02.775346 1 shared_informer.go:320] Caches are synced for taint
I0127 14:14:02.775468 1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0127 14:14:02.775570 1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-519899"
I0127 14:14:02.775618 1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0127 14:14:02.778557 1 shared_informer.go:320] Caches are synced for resource quota
I0127 14:14:02.781969 1 shared_informer.go:320] Caches are synced for HPA
I0127 14:14:02.783135 1 shared_informer.go:320] Caches are synced for resource quota
I0127 14:14:02.783270 1 shared_informer.go:320] Caches are synced for job
I0127 14:14:02.787662 1 shared_informer.go:320] Caches are synced for disruption
I0127 14:14:02.798974 1 shared_informer.go:320] Caches are synced for garbage collector
I0127 14:14:02.800132 1 shared_informer.go:320] Caches are synced for node
I0127 14:14:02.800208 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I0127 14:14:02.800423 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I0127 14:14:02.800532 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0127 14:14:02.800551 1 shared_informer.go:320] Caches are synced for cidrallocator
I0127 14:14:02.800730 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-519899"
I0127 14:14:02.814321 1 shared_informer.go:320] Caches are synced for namespace
I0127 14:14:03.136962 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.537418ms"
I0127 14:14:03.137397 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="364.942µs"
I0127 14:14:13.093568 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.626256ms"
I0127 14:14:13.094201 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.722µs"
I0127 14:14:30.431010 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-519899"
==> kube-controller-manager [e98ee85ad6055058a657a71333fbf0be936a7aa7d1325e16411ae9aeb155e26d] <==
I0127 14:15:34.171082 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="25.311µs"
I0127 14:15:40.119395 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="65.237677ms"
E0127 14:15:40.119515 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.130177 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="36.294686ms"
E0127 14:15:40.130331 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.145851 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="24.852637ms"
E0127 14:15:40.145890 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.146100 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="13.067164ms"
E0127 14:15:40.146220 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.156250 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="8.545707ms"
E0127 14:15:40.156292 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.158210 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="10.491623ms"
E0127 14:15:40.158245 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.175941 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.611277ms"
E0127 14:15:40.175977 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.176236 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="15.922334ms"
E0127 14:15:40.176270 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0127 14:15:40.232977 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="52.343098ms"
I0127 14:15:40.283097 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.799247ms"
I0127 14:15:40.309912 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="76.88538ms"
I0127 14:15:40.310003 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="35.669µs"
I0127 14:15:40.321089 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="89.419µs"
I0127 14:15:40.345904 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="62.750262ms"
I0127 14:15:40.364972 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="19.03272ms"
I0127 14:15:40.365056 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="57.422µs"
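Editor's note: the "serviceaccount \"kubernetes-dashboard\" not found" errors above are the usual transient race while the dashboard addon is still creating its ServiceAccount; the successful ReplicaSet syncs at 14:15:40.23+ show the retries eventually went through. A hedged check that the ServiceAccount the ReplicaSets depend on exists:
  # Verify the ServiceAccount the dashboard ReplicaSets were waiting for.
  kubectl --context functional-519899 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard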
==> kube-proxy [407e9934802a471e3288501bb19fe6ee487cd53d3efe3ab71663e83263f26dbc] <==
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 14:13:45.337535 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
E0127 14:13:46.376998 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
E0127 14:13:48.647165 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
E0127 14:13:53.323875 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
I0127 14:14:02.269377 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
E0127 14:14:02.269453 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 14:14:02.304140 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 14:14:02.304203 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 14:14:02.304246 1 server_linux.go:170] "Using iptables Proxier"
I0127 14:14:02.307820 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 14:14:02.308062 1 server.go:497] "Version info" version="v1.32.1"
I0127 14:14:02.308092 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 14:14:02.309341 1 config.go:329] "Starting node config controller"
I0127 14:14:02.309368 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 14:14:02.309892 1 config.go:199] "Starting service config controller"
I0127 14:14:02.310054 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 14:14:02.310180 1 config.go:105] "Starting endpoint slice config controller"
I0127 14:14:02.310323 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 14:14:02.410435 1 shared_informer.go:320] Caches are synced for service config
I0127 14:14:02.410466 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 14:14:02.410535 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [f4be7e8aeca194b85001740a7975077559aef807e9488d684eb457cb3621108d] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 14:14:52.028359 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 14:14:52.038514 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
E0127 14:14:52.038988 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 14:14:52.067031 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 14:14:52.067251 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 14:14:52.067390 1 server_linux.go:170] "Using iptables Proxier"
I0127 14:14:52.069627 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 14:14:52.070005 1 server.go:497] "Version info" version="v1.32.1"
I0127 14:14:52.070270 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 14:14:52.071685 1 config.go:199] "Starting service config controller"
I0127 14:14:52.072014 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 14:14:52.072222 1 config.go:105] "Starting endpoint slice config controller"
I0127 14:14:52.072289 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 14:14:52.073012 1 config.go:329] "Starting node config controller"
I0127 14:14:52.073105 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 14:14:52.172976 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 14:14:52.173438 1 shared_informer.go:320] Caches are synced for node config
I0127 14:14:52.173454 1 shared_informer.go:320] Caches are synced for service config
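Editor's note: both kube-proxy instances warn that nodePortAddresses is unset and suggest `--nodeport-addresses primary`. In a kubeadm-style cluster that setting lives in the kube-proxy ConfigMap; a hedged way to inspect it, assuming the kubeadm-default ConfigMap name "kube-proxy" and data key "config.conf":
  # Inspect the kube-proxy configuration the warning refers to
  # ("kube-proxy" ConfigMap and "config.conf" key are kubeadm-default assumptions).
  kubectl --context functional-519899 -n kube-system get configmap kube-proxy \
    -o jsonpath='{.data.config\.conf}'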
==> kube-scheduler [16105ddecb22b1e64d2fc0db6a643848d215bbc5f317d4955bbb3da76fcf4e0e] <==
I0127 14:13:57.961295 1 serving.go:386] Generated self-signed cert in-memory
W0127 14:13:59.547901 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0127 14:13:59.547937 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0127 14:13:59.548277 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W0127 14:13:59.548288 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0127 14:13:59.614392 1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
I0127 14:13:59.616810 1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 14:13:59.620849 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0127 14:13:59.621096 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 14:13:59.623997 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0127 14:13:59.624256 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0127 14:13:59.722884 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0127 14:14:40.780134 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [f74175cfd4f749a8d18354425fcbbb2fe74a810052cfa2dcce5d05d8e17a2e81] <==
I0127 14:14:48.988602 1 serving.go:386] Generated self-signed cert in-memory
W0127 14:14:51.120098 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0127 14:14:51.120146 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0127 14:14:51.120164 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W0127 14:14:51.120362 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0127 14:14:51.176943 1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
I0127 14:14:51.176980 1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 14:14:51.192071 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0127 14:14:51.192633 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0127 14:14:51.193376 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0127 14:14:51.193576 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0127 14:14:51.294866 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
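Editor's note: the scheduler warnings above include their own remediation template ('kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'). Since the denial names the user system:kube-scheduler rather than a ServiceAccount, a filled-in variant might look like the following; the binding name is hypothetical and --user is substituted for the template's --serviceaccount. The warning is usually transient and harmless, so this is illustrative only.
  # Illustrative fill-in of the warning's suggested rolebinding
  # (binding name is made up; --user targets the denied user from the log).
  kubectl --context functional-519899 -n kube-system create rolebinding \
    scheduler-extension-apiserver-authentication-reader \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler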
==> kubelet <==
Jan 27 14:15:18 functional-519899 kubelet[4121]: I0127 14:15:18.912579 4121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4s9xp\" (UniqueName: \"kubernetes.io/projected/c03b27b5-ad89-4109-a948-264dc777db6b-kube-api-access-4s9xp\") on node \"functional-519899\" DevicePath \"\""
Jan 27 14:15:19 functional-519899 kubelet[4121]: I0127 14:15:19.922727 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqn4\" (UniqueName: \"kubernetes.io/projected/2e838724-4178-4ca0-b457-87236081912b-kube-api-access-nhqn4\") pod \"hello-node-fcfd88b6f-ph7x4\" (UID: \"2e838724-4178-4ca0-b457-87236081912b\") " pod="default/hello-node-fcfd88b6f-ph7x4"
Jan 27 14:15:20 functional-519899 kubelet[4121]: I0127 14:15:20.527704 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvqk\" (UniqueName: \"kubernetes.io/projected/2127b456-24b1-4d0c-a5ac-bd2698d2facb-kube-api-access-jnvqk\") pod \"hello-node-connect-58f9cf68d8-dzvt5\" (UID: \"2127b456-24b1-4d0c-a5ac-bd2698d2facb\") " pod="default/hello-node-connect-58f9cf68d8-dzvt5"
Jan 27 14:15:21 functional-519899 kubelet[4121]: I0127 14:15:21.483992 4121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03b27b5-ad89-4109-a948-264dc777db6b" path="/var/lib/kubelet/pods/c03b27b5-ad89-4109-a948-264dc777db6b/volumes"
Jan 27 14:15:23 functional-519899 kubelet[4121]: I0127 14:15:23.702112 4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-fcfd88b6f-ph7x4" podStartSLOduration=2.134800251 podStartE2EDuration="4.702084734s" podCreationTimestamp="2025-01-27 14:15:19 +0000 UTC" firstStartedPulling="2025-01-27 14:15:20.320087291 +0000 UTC m=+32.962184600" lastFinishedPulling="2025-01-27 14:15:22.887371786 +0000 UTC m=+35.529469083" observedRunningTime="2025-01-27 14:15:23.68674138 +0000 UTC m=+36.328838698" watchObservedRunningTime="2025-01-27 14:15:23.702084734 +0000 UTC m=+36.344182050"
Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.512808 4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-58f9cf68d8-dzvt5" podStartSLOduration=4.786147735 podStartE2EDuration="6.512785582s" podCreationTimestamp="2025-01-27 14:15:20 +0000 UTC" firstStartedPulling="2025-01-27 14:15:21.263985875 +0000 UTC m=+33.906083176" lastFinishedPulling="2025-01-27 14:15:22.990623726 +0000 UTC m=+35.632721023" observedRunningTime="2025-01-27 14:15:23.70268508 +0000 UTC m=+36.344782396" watchObservedRunningTime="2025-01-27 14:15:26.512785582 +0000 UTC m=+39.154882893"
Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.676737 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkq47\" (UniqueName: \"kubernetes.io/projected/9ad21048-92b8-4b43-a8f3-a7c7597f7771-kube-api-access-bkq47\") pod \"sp-pod\" (UID: \"9ad21048-92b8-4b43-a8f3-a7c7597f7771\") " pod="default/sp-pod"
Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.676964 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14595ad7-79ad-446f-a8c9-01ca94d81850\" (UniqueName: \"kubernetes.io/host-path/9ad21048-92b8-4b43-a8f3-a7c7597f7771-pvc-14595ad7-79ad-446f-a8c9-01ca94d81850\") pod \"sp-pod\" (UID: \"9ad21048-92b8-4b43-a8f3-a7c7597f7771\") " pod="default/sp-pod"
Jan 27 14:15:33 functional-519899 kubelet[4121]: I0127 14:15:33.227926 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpc9\" (UniqueName: \"kubernetes.io/projected/eeb33ffb-4745-4aff-8600-d7989436ca01-kube-api-access-frpc9\") pod \"busybox-mount\" (UID: \"eeb33ffb-4745-4aff-8600-d7989436ca01\") " pod="default/busybox-mount"
Jan 27 14:15:33 functional-519899 kubelet[4121]: I0127 14:15:33.228006 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/eeb33ffb-4745-4aff-8600-d7989436ca01-test-volume\") pod \"busybox-mount\" (UID: \"eeb33ffb-4745-4aff-8600-d7989436ca01\") " pod="default/busybox-mount"
Jan 27 14:15:34 functional-519899 kubelet[4121]: I0127 14:15:34.235348 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48t4\" (UniqueName: \"kubernetes.io/projected/3896773c-b0be-414b-901e-7b018511c481-kube-api-access-g48t4\") pod \"mysql-58ccfd96bb-htngb\" (UID: \"3896773c-b0be-414b-901e-7b018511c481\") " pod="default/mysql-58ccfd96bb-htngb"
Jan 27 14:15:40 functional-519899 kubelet[4121]: W0127 14:15:40.228515 4121 reflector.go:569] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-519899" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-519899' and this object
Jan 27 14:15:40 functional-519899 kubelet[4121]: E0127 14:15:40.228873 4121 reflector.go:166] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-519899\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-519899' and this object" logger="UnhandledError"
Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.229852 4121 status_manager.go:890] "Failed to get status for pod" podUID="4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99" err="pods \"kubernetes-dashboard-7779f9b69b-xww99\" is forbidden: User \"system:node:functional-519899\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-519899' and this object"
Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380241 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3d473bce-1697-4496-a82d-d858960734dd-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-nz2b5\" (UID: \"3d473bce-1697-4496-a82d-d858960734dd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5"
Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380277 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57mv\" (UniqueName: \"kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv\") pod \"dashboard-metrics-scraper-5d59dccf9b-nz2b5\" (UID: \"3d473bce-1697-4496-a82d-d858960734dd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5"
Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380298 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-xww99\" (UID: \"4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99"
Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380313 4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggp7\" (UniqueName: \"kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7\") pod \"kubernetes-dashboard-7779f9b69b-xww99\" (UID: \"4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99"
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490127 4121 projected.go:288] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490165 4121 projected.go:194] Error preparing data for projected volume kube-api-access-5ggp7 for pod kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99: failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490239 4121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7 podName:4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a nodeName:}" failed. No retries permitted until 2025-01-27 14:15:41.990217206 +0000 UTC m=+54.632314502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5ggp7" (UniqueName: "kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7") pod "kubernetes-dashboard-7779f9b69b-xww99" (UID: "4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492597 4121 projected.go:288] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492631 4121 projected.go:194] Error preparing data for projected volume kube-api-access-d57mv for pod kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5: failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492681 4121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv podName:3d473bce-1697-4496-a82d-d858960734dd nodeName:}" failed. No retries permitted until 2025-01-27 14:15:41.992665395 +0000 UTC m=+54.634762692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d57mv" (UniqueName: "kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv") pod "dashboard-metrics-scraper-5d59dccf9b-nz2b5" (UID: "3d473bce-1697-4496-a82d-d858960734dd") : failed to sync configmap cache: timed out waiting for the condition
Jan 27 14:15:42 functional-519899 kubelet[4121]: I0127 14:15:42.736798 4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.498916398 podStartE2EDuration="16.736782332s" podCreationTimestamp="2025-01-27 14:15:26 +0000 UTC" firstStartedPulling="2025-01-27 14:15:27.006288643 +0000 UTC m=+39.648385951" lastFinishedPulling="2025-01-27 14:15:40.244154585 +0000 UTC m=+52.886251885" observedRunningTime="2025-01-27 14:15:40.729273605 +0000 UTC m=+53.371370923" watchObservedRunningTime="2025-01-27 14:15:42.736782332 +0000 UTC m=+55.378879669"
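Editor's note: the kubelet errors about kube-root-ca.crt reflect the brand-new kubernetes-dashboard namespace: the node authorizer has not yet linked the node to those objects ("no relationship found between node 'functional-519899' and this object"), and the volume mounts are retried 500ms later as logged. A hedged check that the ConfigMap was published into the new namespace, as it should be for every namespace:
  # kube-root-ca.crt is auto-published into each namespace; confirm it arrived.
  kubectl --context functional-519899 -n kubernetes-dashboard get configmap kube-root-ca.crt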
==> storage-provisioner [741e2b57791f2a6b9cc82a3905c0407e120002a74d7042a0b10e24dda8771d0b] <==
I0127 14:15:03.583036 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 14:15:03.592190 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 14:15:03.592317 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 14:15:20.992279 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 14:15:20.992547 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-519899_442633ac-3473-4659-bbca-7d24631815f3!
I0127 14:15:20.994925 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce168104-e6dd-4f11-acf6-bb68648c0c5d", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-519899_442633ac-3473-4659-bbca-7d24631815f3 became leader
I0127 14:15:21.092816 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-519899_442633ac-3473-4659-bbca-7d24631815f3!
I0127 14:15:26.333138 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0127 14:15:26.333233 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 4506b111-2e39-4585-8dd1-135ea464d2bf 343 0 2025-01-27 14:13:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-27 14:13:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-14595ad7-79ad-446f-a8c9-01ca94d81850 &PersistentVolumeClaim{ObjectMeta:{myclaim default 14595ad7-79ad-446f-a8c9-01ca94d81850 711 0 2025-01-27 14:15:26 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2025-01-27 14:15:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-27 14:15:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0127 14:15:26.334302 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"14595ad7-79ad-446f-a8c9-01ca94d81850", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0127 14:15:26.334520 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850" provisioned
I0127 14:15:26.334549 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0127 14:15:26.334559 1 volume_store.go:212] Trying to save persistentvolume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850"
I0127 14:15:26.346518 1 volume_store.go:219] persistentvolume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850" saved
I0127 14:15:26.348882 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"14595ad7-79ad-446f-a8c9-01ca94d81850", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-14595ad7-79ad-446f-a8c9-01ca94d81850
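Editor's note: the provisioner log above reports volume pvc-14595ad7-79ad-446f-a8c9-01ca94d81850 being created under /tmp/hostpath-provisioner/default/myclaim for claim default/myclaim. A hedged check that the claim actually bound to that volume, using only the names taken from the log:
  # The claim from the log should now report STATUS Bound to the provisioned PV.
  kubectl --context functional-519899 get pvc myclaim
  kubectl --context functional-519899 get pv pvc-14595ad7-79ad-446f-a8c9-01ca94d81850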
==> storage-provisioner [a222c673336ad7fe23ea996ccdf08c785794fff38802bbbf6670e06444f2312a] <==
I0127 14:14:51.938428 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0127 14:14:51.943524 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-519899 -n functional-519899
helpers_test.go:261: (dbg) Run: kubectl --context functional-519899 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99: exit status 1 (77.738537ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-519899/192.168.39.137
Start Time: Mon, 27 Jan 2025 14:15:33 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Pending
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
mount-munger:
Container ID: containerd://f3c9ec90b51495d344b9a86e51cdfabdec2d93082c369fdbbeffa401399807ef
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 27 Jan 2025 14:15:41 +0000
Finished: Mon, 27 Jan 2025 14:15:41 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frpc9 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-frpc9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/busybox-mount to functional-519899
Normal Pulling 10s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 2s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.523s (8.176s including waiting). Image size: 2395207 bytes.
Normal Created 2s kubelet Created container: mount-munger
Normal Started 2s kubelet Started container mount-munger
Name: mysql-58ccfd96bb-htngb
Namespace: default
Priority: 0
Service Account: default
Node: functional-519899/192.168.39.137
Start Time: Mon, 27 Jan 2025 14:15:34 +0000
Labels: app=mysql
pod-template-hash=58ccfd96bb
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48t4 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-g48t4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9s default-scheduler Successfully assigned default/mysql-58ccfd96bb-htngb to functional-519899
Normal Pulling 9s kubelet Pulling image "docker.io/mysql:5.7"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-nz2b5" not found
Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-xww99" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.60s)