=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] stderr:
I0407 12:16:09.712228 1250704 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:09.713109 1250704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.713150 1250704 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:09.713176 1250704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.713743 1250704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:09.714540 1250704 mustload.go:65] Loading cluster: functional-233546
I0407 12:16:09.714968 1250704 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:09.715328 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.715382 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.732691 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
I0407 12:16:09.733379 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.734066 1250704 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.734094 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.734613 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.734875 1250704 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:09.736903 1250704 host.go:66] Checking if "functional-233546" exists ...
I0407 12:16:09.737369 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.737433 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.754926 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
I0407 12:16:09.755458 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.755971 1250704 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.755995 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.756396 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.756625 1250704 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.756820 1250704 api_server.go:166] Checking apiserver status ...
I0407 12:16:09.756890 1250704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:16:09.756927 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:09.760034 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.760474 1250704 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:09.760517 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.760677 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:09.760907 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:09.761097 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:09.761273 1250704 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:09.876087 1250704 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4540/cgroup
W0407 12:16:09.903303 1250704 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4540/cgroup: Process exited with status 1
stdout:
stderr:
I0407 12:16:09.903369 1250704 ssh_runner.go:195] Run: ls
I0407 12:16:09.924918 1250704 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8441/healthz ...
I0407 12:16:09.929539 1250704 api_server.go:279] https://192.168.39.145:8441/healthz returned 200:
ok
W0407 12:16:09.929601 1250704 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0407 12:16:09.929827 1250704 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:09.929851 1250704 addons.go:69] Setting dashboard=true in profile "functional-233546"
I0407 12:16:09.929863 1250704 addons.go:238] Setting addon dashboard=true in "functional-233546"
I0407 12:16:09.929895 1250704 host.go:66] Checking if "functional-233546" exists ...
I0407 12:16:09.930324 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.930376 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.947634 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
I0407 12:16:09.948284 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.948962 1250704 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.948985 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.949442 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.950419 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.950491 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.967269 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
I0407 12:16:09.967771 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.968214 1250704 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.968238 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.968585 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.968782 1250704 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:09.970341 1250704 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.972443 1250704 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0407 12:16:09.974121 1250704 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0407 12:16:09.975645 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0407 12:16:09.975669 1250704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0407 12:16:09.975696 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:09.979030 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.979535 1250704 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:09.979566 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.979757 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:09.980028 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:09.980193 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:09.980349 1250704 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:10.121650 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0407 12:16:10.121673 1250704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0407 12:16:10.154775 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0407 12:16:10.154802 1250704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0407 12:16:10.183661 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0407 12:16:10.183684 1250704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0407 12:16:10.232604 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0407 12:16:10.232630 1250704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0407 12:16:10.260457 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0407 12:16:10.260491 1250704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0407 12:16:10.300634 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0407 12:16:10.300680 1250704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0407 12:16:10.327854 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0407 12:16:10.327885 1250704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0407 12:16:10.360516 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0407 12:16:10.360540 1250704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0407 12:16:10.381534 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0407 12:16:10.381563 1250704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0407 12:16:10.404451 1250704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 12:16:11.694908 1250704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.290394476s)
I0407 12:16:11.695003 1250704 main.go:141] libmachine: Making call to close driver server
I0407 12:16:11.695020 1250704 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:11.695350 1250704 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:11.695357 1250704 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:11.695371 1250704 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:11.695382 1250704 main.go:141] libmachine: Making call to close driver server
I0407 12:16:11.695394 1250704 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:11.695636 1250704 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:11.695652 1250704 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:11.697287 1250704 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-233546 addons enable metrics-server
I0407 12:16:11.698358 1250704 addons.go:201] Writing out "functional-233546" config to set dashboard=true...
W0407 12:16:11.698567 1250704 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0407 12:16:11.699185 1250704 kapi.go:59] client config for functional-233546: &rest.Config{Host:"https://192.168.39.145:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt", KeyFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.key", CAFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0407 12:16:11.699570 1250704 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0407 12:16:11.699592 1250704 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0407 12:16:11.699608 1250704 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0407 12:16:11.699613 1250704 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0407 12:16:11.738429 1250704 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 64b6e7e9-dc75-4890-9e13-a0f119fa2f10 752 0 2025-04-07 12:16:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-04-07 12:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.216.199,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.216.199],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0407 12:16:11.738591 1250704 out.go:270] * Launching proxy ...
* Launching proxy ...
I0407 12:16:11.738661 1250704 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-233546 proxy --port 36195]
I0407 12:16:11.738969 1250704 dashboard.go:157] Waiting for kubectl to output host:port ...
I0407 12:16:11.793540 1250704 out.go:201]
W0407 12:16:11.794993 1250704 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0407 12:16:11.795017 1250704 out.go:270] *
*
W0407 12:16:11.799163 1250704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_profile_d1ca4947b8443d05a16ba2db66e65ef843e55a01_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_profile_d1ca4947b8443d05a16ba2db66e65ef843e55a01_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 12:16:11.800968 1250704 out.go:201]
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-233546 -n functional-233546
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-233546 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 logs -n 25: (2.096770394s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| cache | delete | minikube | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | delete | minikube | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-233546 kubectl -- | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
| | --context functional-233546 | | | | | |
| | get pods | | | | | |
| start | -p functional-233546 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:16 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
| service | invalid-svc -p | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | functional-233546 | | | | | |
| cp | functional-233546 cp | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-233546 config unset | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | cpus | | | | | |
| config | functional-233546 config get | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | cpus | | | | | |
| config | functional-233546 config set | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | cpus 2 | | | | | |
| config | functional-233546 config get | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | cpus | | | | | |
| ssh | functional-233546 ssh -n | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | functional-233546 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-233546 config unset | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | cpus | | | | | |
| config | functional-233546 config get | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | cpus | | | | | |
| start | -p functional-233546 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| cp | functional-233546 cp | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | functional-233546:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd4024997346/001/cp-test.txt | | | | | |
| start | -p functional-233546 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-233546 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | functional-233546 ssh -n | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | functional-233546 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| dashboard | --url --port 36195 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | -p functional-233546 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| cp | functional-233546 cp | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-233546 ssh -n | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | functional-233546 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-233546 ssh echo | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | hello | | | | | |
| ssh | functional-233546 ssh cat | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
| | /etc/hostname | | | | | |
| ssh | functional-233546 ssh findmnt | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| mount | -p functional-233546 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port3835906352/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/07 12:16:09
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0407 12:16:09.533843 1250634 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:09.533992 1250634 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.534005 1250634 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:09.534014 1250634 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.534488 1250634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:09.535299 1250634 out.go:352] Setting JSON to false
I0407 12:16:09.536680 1250634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28716,"bootTime":1743999454,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0407 12:16:09.536770 1250634 start.go:139] virtualization: kvm guest
I0407 12:16:09.540255 1250634 out.go:177] * [functional-233546] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0407 12:16:09.541582 1250634 notify.go:220] Checking for updates...
I0407 12:16:09.541607 1250634 out.go:177] - MINIKUBE_LOCATION=20602
I0407 12:16:09.542911 1250634 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 12:16:09.544355 1250634 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
I0407 12:16:09.545753 1250634 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
I0407 12:16:09.547015 1250634 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0407 12:16:09.548247 1250634 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 12:16:09.550034 1250634 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:09.550730 1250634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.550823 1250634 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.569636 1250634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
I0407 12:16:09.570395 1250634 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.571017 1250634 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.571038 1250634 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.571949 1250634 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.572119 1250634 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.572356 1250634 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 12:16:09.572665 1250634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.572699 1250634 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.601162 1250634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
I0407 12:16:09.601731 1250634 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.602323 1250634 main.go:141] libmachine: Using API Version 1
I0407 12:16:09.602344 1250634 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.602663 1250634 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.602843 1250634 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.640492 1250634 out.go:177] * Using the kvm2 driver based on existing profile
I0407 12:16:09.641925 1250634 start.go:297] selected driver: kvm2
I0407 12:16:09.641948 1250634 start.go:901] validating driver "kvm2" against &{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:16:09.642050 1250634 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 12:16:09.643103 1250634 cni.go:84] Creating CNI manager for ""
I0407 12:16:09.643157 1250634 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0407 12:16:09.643208 1250634 start.go:340] cluster config:
{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 12:16:09.645376 1250634 out.go:177] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
95ae599649ebd 82e4c8a736a4f Less than a second ago Running echoserver 0 540ed9b3608ac hello-node-fcfd88b6f-2mtcl
3426251924490 6e38f40d628db 14 seconds ago Running storage-provisioner 4 5f64d91128370 storage-provisioner
b898d99e3b3ae f1332858868e1 30 seconds ago Running kube-proxy 2 36e46290dc1a8 kube-proxy-5r4lm
4df8c896b4853 6e38f40d628db 30 seconds ago Exited storage-provisioner 3 5f64d91128370 storage-provisioner
eb4b247cd1c35 85b7a174738ba 33 seconds ago Running kube-apiserver 0 bf9cbf0b23e92 kube-apiserver-functional-233546
2bff6a336fd42 b6a454c5a800d 34 seconds ago Running kube-controller-manager 2 9a5fc45d42c0a kube-controller-manager-functional-233546
e5eb6664340a4 d8e673e7c9983 34 seconds ago Running kube-scheduler 2 795bf3b68273a kube-scheduler-functional-233546
197df4b827f4c a9e7e6b294baf 34 seconds ago Running etcd 2 40376fdcdf4f5 etcd-functional-233546
facb218d99873 c69fa2e9cbf5f 36 seconds ago Running coredns 2 357791f81da5d coredns-668d6bf9bc-j5tfb
e4345a0980955 a9e7e6b294baf About a minute ago Exited etcd 1 40376fdcdf4f5 etcd-functional-233546
2652a6574b833 b6a454c5a800d About a minute ago Exited kube-controller-manager 1 9a5fc45d42c0a kube-controller-manager-functional-233546
7ba8dc3a6e3d9 d8e673e7c9983 About a minute ago Exited kube-scheduler 1 795bf3b68273a kube-scheduler-functional-233546
a512d3610ad3a c69fa2e9cbf5f 2 minutes ago Exited coredns 1 357791f81da5d coredns-668d6bf9bc-j5tfb
83ac8f474aa83 f1332858868e1 2 minutes ago Exited kube-proxy 1 36e46290dc1a8 kube-proxy-5r4lm
==> containerd <==
Apr 07 12:16:11 functional-233546 containerd[3705]: time="2025-04-07T12:16:11.806072261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-4xpfh,Uid:2a78a4ff-016a-48f2-a823-a55d7246439a,Namespace:kubernetes-dashboard,Attempt:0,}"
Apr 07 12:16:11 functional-233546 containerd[3705]: time="2025-04-07T12:16:11.816875558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-ccc5d,Uid:b30f4bda-f591-4591-a991-b90b12032927,Namespace:kubernetes-dashboard,Attempt:0,}"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.226187153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.226661693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.228066492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.229248874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.266993440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280025363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280044287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280447688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.404443227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-ccc5d,Uid:b30f4bda-f591-4591-a991-b90b12032927,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"bd96ee207ed4c3b7eb0b9a99ded736e5fd8f80d187f0b3e1b2a0915e213057cb\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.466348417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-4xpfh,Uid:2a78a4ff-016a-48f2-a823-a55d7246439a,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"ab1f840778bed65fedeccb0c3e30493a5fe29e57a58030459f1f314af1cd9d63\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.754751656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/echoserver:1.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.758373392Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.8: active requests=0, bytes read=46245285"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.760759527Z" level=info msg="ImageCreate event name:\"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.765477480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.768202714Z" level=info msg="Pulled image \"registry.k8s.io/echoserver:1.8\" with image id \"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\", repo tag \"registry.k8s.io/echoserver:1.8\", repo digest \"registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969\", size \"46237695\" in 3.276297307s"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.768426718Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.8\" returns image reference \"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.772605851Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.775012434Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.775796039Z" level=info msg="CreateContainer within sandbox \"540ed9b3608acdb05335fe8e1afa688c2c024b253a4731a94caf925a352b8005\" for container &ContainerMetadata{Name:echoserver,Attempt:0,}"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.813622961Z" level=info msg="CreateContainer within sandbox \"540ed9b3608acdb05335fe8e1afa688c2c024b253a4731a94caf925a352b8005\" for &ContainerMetadata{Name:echoserver,Attempt:0,} returns container id \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.819630184Z" level=info msg="StartContainer for \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\""
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.870235718Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.911092911Z" level=info msg="StartContainer for \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\" returns successfully"
==> coredns [a512d3610ad3ad6b3ab2a1771124b35ffc00f38b1eb9739ef140de0a74a4d675] <==
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/kubernetes: Trace[1773096416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:27.625) (total time: 10002ms):
Trace[1773096416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:14:37.627)
Trace[1773096416]: [10.002043099s] [10.002043099s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/kubernetes: Trace[886648764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:27.865) (total time: 10001ms):
Trace[886648764]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:14:37.866)
Trace[886648764]: [10.001464323s] [10.001464323s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/kubernetes: Trace[469826125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:30.028) (total time: 10000ms):
Trace[469826125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (12:14:40.029)
Trace[469826125]: [10.000964923s] [10.000964923s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [facb218d99873973466b8da9c18ee84ae539517990cbaa2389637a0a0b1984a9] <==
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:47978 - 50391 "HINFO IN 1078490451081813515.6603114991866582823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040409887s
==> describe nodes <==
Name: functional-233546
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-233546
kubernetes.io/os=linux
minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
minikube.k8s.io/name=functional-233546
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_07T12_13_19_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Apr 2025 12:13:16 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-233546
AcquireTime: <unset>
RenewTime: Mon, 07 Apr 2025 12:16:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Apr 2025 12:15:41 +0000 Mon, 07 Apr 2025 12:13:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Apr 2025 12:15:41 +0000 Mon, 07 Apr 2025 12:13:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Apr 2025 12:15:41 +0000 Mon, 07 Apr 2025 12:13:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Apr 2025 12:15:41 +0000 Mon, 07 Apr 2025 12:13:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.145
Hostname: functional-233546
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 9630c455a5b54b73bacb3c6e2a6d0899
System UUID: 9630c455-a5b5-4b73-bacb-3c6e2a6d0899
Boot ID: 7e602861-bc79-4aa5-8bfb-369379362f94
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-fcfd88b6f-2mtcl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5s
kube-system coredns-668d6bf9bc-j5tfb 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 2m50s
kube-system etcd-functional-233546 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 2m54s
kube-system kube-apiserver-functional-233546 250m (12%) 0 (0%) 0 (0%) 0 (0%) 31s
kube-system kube-controller-manager-functional-233546 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m54s
kube-system kube-proxy-5r4lm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system kube-scheduler-functional-233546 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m54s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m48s
kubernetes-dashboard dashboard-metrics-scraper-5d59dccf9b-ccc5d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
kubernetes-dashboard kubernetes-dashboard-7779f9b69b-4xpfh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m48s kube-proxy
Normal Starting 30s kube-proxy
Normal Starting 77s kube-proxy
Normal Starting 2m55s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m54s kubelet Node functional-233546 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m54s kubelet Node functional-233546 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m54s kubelet Node functional-233546 status is now: NodeHasSufficientPID
Normal NodeReady 2m54s kubelet Node functional-233546 status is now: NodeReady
Normal NodeAllocatableEnforced 2m54s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m51s node-controller Node functional-233546 event: Registered Node functional-233546 in Controller
Normal NodeHasSufficientPID 111s (x7 over 111s) kubelet Node functional-233546 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 111s (x8 over 111s) kubelet Node functional-233546 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 111s (x8 over 111s) kubelet Node functional-233546 status is now: NodeHasNoDiskPressure
Normal Starting 111s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 111s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 85s node-controller Node functional-233546 event: Registered Node functional-233546 in Controller
Normal Starting 35s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 35s (x8 over 35s) kubelet Node functional-233546 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 35s (x8 over 35s) kubelet Node functional-233546 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 35s (x7 over 35s) kubelet Node functional-233546 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 35s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 28s node-controller Node functional-233546 event: Registered Node functional-233546 in Controller
==> dmesg <==
[ +0.155774] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
[ +0.321402] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
[ +1.562144] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
[ +0.082169] kauditd_printk_skb: 102 callbacks suppressed
[ +5.752665] kauditd_printk_skb: 18 callbacks suppressed
[ +10.149438] kauditd_printk_skb: 2 callbacks suppressed
[ +1.943721] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
[ +19.100643] kauditd_printk_skb: 21 callbacks suppressed
[ +14.969124] kauditd_printk_skb: 11 callbacks suppressed
[Apr 7 12:15] systemd-fstab-generator[3268]: Ignoring "noauto" option for root device
[ +10.979846] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
[ +0.079581] kauditd_printk_skb: 14 callbacks suppressed
[ +0.077138] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
[ +0.180331] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
[ +0.148412] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
[ +0.316451] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
[ +1.522538] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
[ +10.804641] kauditd_printk_skb: 124 callbacks suppressed
[ +5.268945] kauditd_printk_skb: 1 callbacks suppressed
[ +1.750639] systemd-fstab-generator[4321]: Ignoring "noauto" option for root device
[ +4.272161] kauditd_printk_skb: 44 callbacks suppressed
[ +8.908541] kauditd_printk_skb: 4 callbacks suppressed
[ +7.037133] systemd-fstab-generator[4804]: Ignoring "noauto" option for root device
[Apr 7 12:16] kauditd_printk_skb: 17 callbacks suppressed
[ +5.486120] kauditd_printk_skb: 33 callbacks suppressed
==> etcd [197df4b827f4c18b1644b12631adb1571365b5095b41e977e6335528c27fb3f9] <==
{"level":"info","ts":"2025-04-07T12:15:39.412298Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2025-04-07T12:15:39.412355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2025-04-07T12:15:39.412364Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2025-04-07T12:15:39.412627Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.145:2380"}
{"level":"info","ts":"2025-04-07T12:15:39.412655Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.145:2380"}
{"level":"info","ts":"2025-04-07T12:15:39.413552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)"}
{"level":"info","ts":"2025-04-07T12:15:39.413631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
{"level":"info","ts":"2025-04-07T12:15:39.414099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:15:39.414195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-04-07T12:15:40.648479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 3"}
{"level":"info","ts":"2025-04-07T12:15:40.648607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 3"}
{"level":"info","ts":"2025-04-07T12:15:40.648696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 3"}
{"level":"info","ts":"2025-04-07T12:15:40.648749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 4"}
{"level":"info","ts":"2025-04-07T12:15:40.648769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 4"}
{"level":"info","ts":"2025-04-07T12:15:40.648825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 4"}
{"level":"info","ts":"2025-04-07T12:15:40.648882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 4"}
{"level":"info","ts":"2025-04-07T12:15:40.653899Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:functional-233546 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
{"level":"info","ts":"2025-04-07T12:15:40.653900Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:15:40.654209Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-04-07T12:15:40.654242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-04-07T12:15:40.653976Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:15:40.655003Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:15:40.655628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
{"level":"info","ts":"2025-04-07T12:15:40.655005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:15:40.656401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> etcd [e4345a09809559fece6a2907582b5baf7697f8e6b4df1c5f37cc46f547216c2b] <==
{"level":"info","ts":"2025-04-07T12:14:43.973379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
{"level":"info","ts":"2025-04-07T12:14:43.973546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
{"level":"info","ts":"2025-04-07T12:14:43.973656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
{"level":"info","ts":"2025-04-07T12:14:43.973692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
{"level":"info","ts":"2025-04-07T12:14:43.973812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
{"level":"info","ts":"2025-04-07T12:14:43.973908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
{"level":"info","ts":"2025-04-07T12:14:43.980098Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:functional-233546 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
{"level":"info","ts":"2025-04-07T12:14:43.980273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:14:43.980674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-04-07T12:14:43.981057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-04-07T12:14:43.981164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-04-07T12:14:43.981737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:14:43.981882Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-04-07T12:14:43.982611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-04-07T12:14:43.982944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
{"level":"info","ts":"2025-04-07T12:15:31.292110Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2025-04-07T12:15:31.292279Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-233546","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
{"level":"warn","ts":"2025-04-07T12:15:31.292372Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-04-07T12:15:31.292418Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2025-04-07T12:15:31.293959Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
{"level":"warn","ts":"2025-04-07T12:15:31.293986Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
{"level":"info","ts":"2025-04-07T12:15:31.294021Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"44b3a0f32f80bb09"}
{"level":"info","ts":"2025-04-07T12:15:31.297307Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
{"level":"info","ts":"2025-04-07T12:15:31.297465Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
{"level":"info","ts":"2025-04-07T12:15:31.297477Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-233546","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
==> kernel <==
12:16:13 up 3 min, 0 users, load average: 0.85, 0.39, 0.15
Linux functional-233546 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [eb4b247cd1c35a9cb133f39aa18078cd0fdd0eb8cb6abf9e8b2bb467bdfb14a0] <==
I0407 12:15:41.861936 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0407 12:15:41.867257 1 aggregator.go:171] initial CRD sync complete...
I0407 12:15:41.867382 1 autoregister_controller.go:144] Starting autoregister controller
I0407 12:15:41.867471 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0407 12:15:41.867555 1 cache.go:39] Caches are synced for autoregister controller
I0407 12:15:41.867898 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I0407 12:15:41.881472 1 shared_informer.go:320] Caches are synced for node_authorizer
I0407 12:15:41.894613 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0407 12:15:41.896970 1 policy_source.go:240] refreshing policies
I0407 12:15:41.967562 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0407 12:15:42.493627 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0407 12:15:42.771979 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0407 12:15:43.181303 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
I0407 12:15:43.182859 1 controller.go:615] quota admission added evaluator for: endpoints
I0407 12:15:43.195612 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0407 12:15:43.711106 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0407 12:15:43.742644 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0407 12:15:43.766740 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0407 12:15:43.775435 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0407 12:15:51.406804 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0407 12:16:04.471200 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.17.122"}
I0407 12:16:08.934405 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.125.231"}
I0407 12:16:11.163659 1 controller.go:615] quota admission added evaluator for: namespaces
I0407 12:16:11.608536 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.216.199"}
I0407 12:16:11.666242 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.46.84"}
==> kube-controller-manager [2652a6574b833b7c944b3ec7af899a37b1b146af97c909e84a55c61a00761b3d] <==
I0407 12:14:48.291362 1 shared_informer.go:320] Caches are synced for GC
I0407 12:14:48.291984 1 shared_informer.go:320] Caches are synced for PV protection
I0407 12:14:48.295646 1 shared_informer.go:320] Caches are synced for namespace
I0407 12:14:48.296064 1 shared_informer.go:320] Caches are synced for resource quota
I0407 12:14:48.297830 1 shared_informer.go:320] Caches are synced for job
I0407 12:14:48.299915 1 shared_informer.go:320] Caches are synced for PVC protection
I0407 12:14:48.302076 1 shared_informer.go:320] Caches are synced for TTL after finished
I0407 12:14:48.303769 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0407 12:14:48.305451 1 shared_informer.go:313] Waiting for caches to sync for garbage collector
I0407 12:14:48.305865 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0407 12:14:48.311697 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I0407 12:14:48.315035 1 shared_informer.go:320] Caches are synced for attach detach
I0407 12:14:48.315353 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
I0407 12:14:48.315430 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
I0407 12:14:48.316901 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0407 12:14:48.317154 1 shared_informer.go:320] Caches are synced for endpoint
I0407 12:14:48.319214 1 shared_informer.go:320] Caches are synced for ReplicationController
I0407 12:14:48.389864 1 shared_informer.go:320] Caches are synced for garbage collector
I0407 12:14:48.389905 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I0407 12:14:48.389912 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
I0407 12:14:48.406428 1 shared_informer.go:320] Caches are synced for garbage collector
I0407 12:14:48.750504 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="444.415803ms"
I0407 12:14:48.750799 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="104.246µs"
I0407 12:15:06.437924 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.023522ms"
I0407 12:15:06.439413 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.807µs"
==> kube-controller-manager [2bff6a336fd42295929b35cad337011d7045cde43f055de621513e49af53c6b8] <==
I0407 12:16:08.910309 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="41.673µs"
I0407 12:16:11.372953 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="96.325388ms"
E0407 12:16:11.372981 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.381452 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="78.433824ms"
E0407 12:16:11.381505 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.399195 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.654995ms"
E0407 12:16:11.400522 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.402557 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.822831ms"
E0407 12:16:11.402808 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.418362 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="14.484461ms"
E0407 12:16:11.418561 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.418771 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="14.814738ms"
E0407 12:16:11.418788 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.439437 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="18.060389ms"
E0407 12:16:11.439465 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0407 12:16:11.488001 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="67.919753ms"
I0407 12:16:11.515815 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="72.771074ms"
I0407 12:16:11.532577 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="44.538381ms"
I0407 12:16:11.532638 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="38.49µs"
I0407 12:16:11.540199 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="23.106878ms"
I0407 12:16:11.540747 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="24.271µs"
I0407 12:16:11.562389 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="26.897µs"
I0407 12:16:11.589955 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="232.701µs"
I0407 12:16:13.760921 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="16.087519ms"
I0407 12:16:13.762360 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="26.226µs"
==> kube-proxy [83ac8f474aa83fecaf8b8fe842ce71da4ebef34dece8588eb38ed457875766e2] <==
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
E0407 12:14:10.388037 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
E0407 12:14:11.544447 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
E0407 12:14:13.613937 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
E0407 12:14:18.275283 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
E0407 12:14:37.783491 1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": net/http: TLS handshake timeout"
I0407 12:14:55.870634 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
E0407 12:14:55.870978 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0407 12:14:55.933223 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0407 12:14:55.933279 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0407 12:14:55.933327 1 server_linux.go:170] "Using iptables Proxier"
I0407 12:14:55.935870 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0407 12:14:55.936375 1 server.go:497] "Version info" version="v1.32.2"
I0407 12:14:55.936402 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:14:55.938308 1 config.go:199] "Starting service config controller"
I0407 12:14:55.938345 1 shared_informer.go:313] Waiting for caches to sync for service config
I0407 12:14:55.938544 1 config.go:105] "Starting endpoint slice config controller"
I0407 12:14:55.938679 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0407 12:14:55.939306 1 config.go:329] "Starting node config controller"
I0407 12:14:55.939336 1 shared_informer.go:313] Waiting for caches to sync for node config
I0407 12:14:56.039186 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0407 12:14:56.039207 1 shared_informer.go:320] Caches are synced for service config
I0407 12:14:56.039546 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [b898d99e3b3aee5f757bec32bb79c332b238d51a04141a850105b9d32fa9c806] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0407 12:15:43.204279 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0407 12:15:43.212458 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
E0407 12:15:43.214596 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0407 12:15:43.250465 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0407 12:15:43.250509 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0407 12:15:43.250562 1 server_linux.go:170] "Using iptables Proxier"
I0407 12:15:43.253488 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0407 12:15:43.254355 1 server.go:497] "Version info" version="v1.32.2"
I0407 12:15:43.254810 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:15:43.256475 1 config.go:199] "Starting service config controller"
I0407 12:15:43.256604 1 shared_informer.go:313] Waiting for caches to sync for service config
I0407 12:15:43.256711 1 config.go:105] "Starting endpoint slice config controller"
I0407 12:15:43.256789 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0407 12:15:43.257355 1 config.go:329] "Starting node config controller"
I0407 12:15:43.257484 1 shared_informer.go:313] Waiting for caches to sync for node config
I0407 12:15:43.356835 1 shared_informer.go:320] Caches are synced for service config
I0407 12:15:43.356857 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0407 12:15:43.358412 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [7ba8dc3a6e3d950457e13fa1de82fb9af24b1d8c4d472ac6ee19468481ed7704] <==
I0407 12:14:43.229503 1 serving.go:386] Generated self-signed cert in-memory
W0407 12:14:45.132004 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0407 12:14:45.132236 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0407 12:14:45.132370 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W0407 12:14:45.132491 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0407 12:14:45.196357 1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
I0407 12:14:45.196973 1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:14:45.199386 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0407 12:14:45.200823 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0407 12:14:45.200874 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0407 12:14:45.215272 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 12:14:45.315667 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 12:15:31.430755 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0407 12:15:31.430807 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
E0407 12:15:31.430904 1 run.go:72] "command failed" err="finished without leader elect"
==> kube-scheduler [e5eb6664340a46da962e89d1f288f990dd273a7ad114b57946ab211c85a13e31] <==
I0407 12:15:39.745192 1 serving.go:386] Generated self-signed cert in-memory
I0407 12:15:41.900823 1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
I0407 12:15:41.900860 1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0407 12:15:41.906065 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
I0407 12:15:41.906101 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0407 12:15:41.906173 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0407 12:15:41.906311 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 12:15:41.906479 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0407 12:15:41.906551 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0407 12:15:41.906721 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0407 12:15:41.906866 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0407 12:15:42.006964 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0407 12:15:42.007242 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 12:15:42.007418 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
==> kubelet <==
Apr 07 12:15:44 functional-233546 kubelet[4328]: I0407 12:15:44.519642 4328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0583eb23f45b569f2d8f32705a3ca5a" path="/var/lib/kubelet/pods/a0583eb23f45b569f2d8f32705a3ca5a/volumes"
Apr 07 12:15:58 functional-233546 kubelet[4328]: I0407 12:15:58.517559 4328 scope.go:117] "RemoveContainer" containerID="4df8c896b48531f0b62efc15f398a8514a51465d56e4e7f1fa868a1175e38bd3"
Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.451226 4328 memory_manager.go:355] "RemoveStaleState removing state" podUID="a0583eb23f45b569f2d8f32705a3ca5a" containerName="kube-apiserver"
Apr 07 12:16:04 functional-233546 kubelet[4328]: W0407 12:16:04.454103 4328 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-233546" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'functional-233546' and this object
Apr 07 12:16:04 functional-233546 kubelet[4328]: E0407 12:16:04.454213 4328 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-233546\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'functional-233546' and this object" logger="UnhandledError"
Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.454263 4328 status_manager.go:890] "Failed to get status for pod" podUID="29c269d3-92fc-4c73-92af-9513fa556724" pod="default/invalid-svc" err="pods \"invalid-svc\" is forbidden: User \"system:node:functional-233546\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'functional-233546' and this object"
Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.539540 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") pod \"invalid-svc\" (UID: \"29c269d3-92fc-4c73-92af-9513fa556724\") " pod="default/invalid-svc"
Apr 07 12:16:05 functional-233546 kubelet[4328]: I0407 12:16:05.397815 4328 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961229 4328 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961301 4328 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961445 4328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:nonexistingimage:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4hpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod invalid-svc_d
efault(29c269d3-92fc-4c73-92af-9513fa556724): ErrImagePull: failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" logger="UnhandledError"
Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.963556 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="29c269d3-92fc-4c73-92af-9513fa556724"
Apr 07 12:16:06 functional-233546 kubelet[4328]: E0407 12:16:06.695947 4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="29c269d3-92fc-4c73-92af-9513fa556724"
Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.066708 4328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") pod \"29c269d3-92fc-4c73-92af-9513fa556724\" (UID: \"29c269d3-92fc-4c73-92af-9513fa556724\") "
Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.069336 4328 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg" (OuterVolumeSpecName: "kube-api-access-k4hpg") pod "29c269d3-92fc-4c73-92af-9513fa556724" (UID: "29c269d3-92fc-4c73-92af-9513fa556724"). InnerVolumeSpecName "kube-api-access-k4hpg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.167284 4328 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") on node \"functional-233546\" DevicePath \"\""
Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.871904 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv864\" (UniqueName: \"kubernetes.io/projected/b7195cd8-f289-4379-844f-9bb4e80bf697-kube-api-access-mv864\") pod \"hello-node-fcfd88b6f-2mtcl\" (UID: \"b7195cd8-f289-4379-844f-9bb4e80bf697\") " pod="default/hello-node-fcfd88b6f-2mtcl"
Apr 07 12:16:10 functional-233546 kubelet[4328]: I0407 12:16:10.519192 4328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c269d3-92fc-4c73-92af-9513fa556724" path="/var/lib/kubelet/pods/29c269d3-92fc-4c73-92af-9513fa556724/volumes"
Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594022 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a78a4ff-016a-48f2-a823-a55d7246439a-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-4xpfh\" (UID: \"2a78a4ff-016a-48f2-a823-a55d7246439a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-4xpfh"
Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594060 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b30f4bda-f591-4591-a991-b90b12032927-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-ccc5d\" (UID: \"b30f4bda-f591-4591-a991-b90b12032927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-ccc5d"
Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594081 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82llr\" (UniqueName: \"kubernetes.io/projected/2a78a4ff-016a-48f2-a823-a55d7246439a-kube-api-access-82llr\") pod \"kubernetes-dashboard-7779f9b69b-4xpfh\" (UID: \"2a78a4ff-016a-48f2-a823-a55d7246439a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-4xpfh"
Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594098 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28r8c\" (UniqueName: \"kubernetes.io/projected/b30f4bda-f591-4591-a991-b90b12032927-kube-api-access-28r8c\") pod \"dashboard-metrics-scraper-5d59dccf9b-ccc5d\" (UID: \"b30f4bda-f591-4591-a991-b90b12032927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-ccc5d"
Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.611043 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/9aa5358a-1712-4b82-b3a2-6dc42c0336d6-test-volume\") pod \"busybox-mount\" (UID: \"9aa5358a-1712-4b82-b3a2-6dc42c0336d6\") " pod="default/busybox-mount"
Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.611101 4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvpq\" (UniqueName: \"kubernetes.io/projected/9aa5358a-1712-4b82-b3a2-6dc42c0336d6-kube-api-access-rwvpq\") pod \"busybox-mount\" (UID: \"9aa5358a-1712-4b82-b3a2-6dc42c0336d6\") " pod="default/busybox-mount"
Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.746406 4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-fcfd88b6f-2mtcl" podStartSLOduration=2.463927751 podStartE2EDuration="5.746387883s" podCreationTimestamp="2025-04-07 12:16:08 +0000 UTC" firstStartedPulling="2025-04-07 12:16:09.488296522 +0000 UTC m=+31.124098196" lastFinishedPulling="2025-04-07 12:16:12.770756655 +0000 UTC m=+34.406558328" observedRunningTime="2025-04-07 12:16:13.746023269 +0000 UTC m=+35.381824946" watchObservedRunningTime="2025-04-07 12:16:13.746387883 +0000 UTC m=+35.382189561"
==> storage-provisioner [3426251924490aaabb73e7a36a34b1110436bf9525d16b588c4ed29b12c0a4eb] <==
I0407 12:15:58.722622 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0407 12:15:58.730053 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0407 12:15:58.730290 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
==> storage-provisioner [4df8c896b48531f0b62efc15f398a8514a51465d56e4e7f1fa868a1175e38bd3] <==
I0407 12:15:42.941091 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0407 12:15:42.946655 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-233546 -n functional-233546
helpers_test.go:261: (dbg) Run: kubectl --context functional-233546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh: exit status 1 (169.376223ms)
-- stdout --
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-233546/192.168.39.145
Start Time:       Mon, 07 Apr 2025 12:16:13 +0000
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Containers:
  mount-munger:
    Container ID:
    Image:          gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sh
      -c
      --
    Args:
      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mount-9p from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwvpq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  test-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /mount-9p
    HostPathType:
  kube-api-access-rwvpq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/busybox-mount to functional-233546
  Normal  Pulling    0s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-ccc5d" not found
Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-4xpfh" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.24s)