=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380530 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380530 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380530 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-380530 --alsologtostderr -v=1] stderr:
I0103 19:06:05.500636 22404 out.go:296] Setting OutFile to fd 1 ...
I0103 19:06:05.500785 22404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:06:05.500794 22404 out.go:309] Setting ErrFile to fd 2...
I0103 19:06:05.500798 22404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:06:05.500993 22404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9089/.minikube/bin
I0103 19:06:05.501262 22404 mustload.go:65] Loading cluster: functional-380530
I0103 19:06:05.501660 22404 config.go:182] Loaded profile config "functional-380530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 19:06:05.502026 22404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.502068 22404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.516451 22404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
I0103 19:06:05.516977 22404 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.517659 22404 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.517678 22404 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.518130 22404 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.518384 22404 main.go:141] libmachine: (functional-380530) Calling .GetState
I0103 19:06:05.520051 22404 host.go:66] Checking if "functional-380530" exists ...
I0103 19:06:05.520331 22404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.520374 22404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.534640 22404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
I0103 19:06:05.535059 22404 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.535557 22404 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.535612 22404 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.536026 22404 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.536229 22404 main.go:141] libmachine: (functional-380530) Calling .DriverName
I0103 19:06:05.536410 22404 api_server.go:166] Checking apiserver status ...
I0103 19:06:05.536473 22404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0103 19:06:05.536501 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHHostname
I0103 19:06:05.539017 22404 main.go:141] libmachine: (functional-380530) DBG | domain functional-380530 has defined MAC address 52:54:00:cc:1c:df in network mk-functional-380530
I0103 19:06:05.539394 22404 main.go:141] libmachine: (functional-380530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:1c:df", ip: ""} in network mk-functional-380530: {Iface:virbr1 ExpiryTime:2024-01-03 20:03:35 +0000 UTC Type:0 Mac:52:54:00:cc:1c:df Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-380530 Clientid:01:52:54:00:cc:1c:df}
I0103 19:06:05.539432 22404 main.go:141] libmachine: (functional-380530) DBG | domain functional-380530 has defined IP address 192.168.39.158 and MAC address 52:54:00:cc:1c:df in network mk-functional-380530
I0103 19:06:05.539528 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHPort
I0103 19:06:05.539698 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHKeyPath
I0103 19:06:05.539848 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHUsername
I0103 19:06:05.539977 22404 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9089/.minikube/machines/functional-380530/id_rsa Username:docker}
I0103 19:06:05.651849 22404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7572/cgroup
I0103 19:06:05.665065 22404 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod0d8794ffbc22a5a59f66d6f71ff86ef5/f44fef3eb9f24b5cc9aa5d9ddbc98910713fba63e09f08bd4b227b50e7b15428"
I0103 19:06:05.665125 22404 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0d8794ffbc22a5a59f66d6f71ff86ef5/f44fef3eb9f24b5cc9aa5d9ddbc98910713fba63e09f08bd4b227b50e7b15428/freezer.state
I0103 19:06:05.675363 22404 api_server.go:204] freezer state: "THAWED"
I0103 19:06:05.675385 22404 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8441/healthz ...
I0103 19:06:05.681578 22404 api_server.go:279] https://192.168.39.158:8441/healthz returned 200:
ok
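The probe above is minikube's usual apiserver health sequence: locate the kube-apiserver process with pgrep, confirm its freezer cgroup is THAWED, then GET /healthz on the advertised endpoint. A minimal Go sketch of that final healthz call, reusing the endpoint and CA path from this run (an illustrative standalone probe, not minikube's api_server.go):

// healthz_probe.go — illustrative standalone probe; endpoint and CA path are assumptions taken from the log above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// CA used to trust the minikube apiserver certificate (path from this run).
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17885-9089/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// Same endpoint the log checks: https://192.168.39.158:8441/healthz
	resp, err := client.Get("https://192.168.39.158:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log above shows 200 with body "ok"
}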
W0103 19:06:05.681624 22404 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0103 19:06:05.681852 22404 config.go:182] Loaded profile config "functional-380530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 19:06:05.681873 22404 addons.go:69] Setting dashboard=true in profile "functional-380530"
I0103 19:06:05.681885 22404 addons.go:237] Setting addon dashboard=true in "functional-380530"
I0103 19:06:05.681922 22404 host.go:66] Checking if "functional-380530" exists ...
I0103 19:06:05.682191 22404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.682225 22404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.696504 22404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46461
I0103 19:06:05.696880 22404 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.697402 22404 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.697427 22404 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.697737 22404 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.698176 22404 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.698231 22404 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.712218 22404 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
I0103 19:06:05.712578 22404 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.712974 22404 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.713002 22404 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.713313 22404 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.713479 22404 main.go:141] libmachine: (functional-380530) Calling .GetState
I0103 19:06:05.714974 22404 main.go:141] libmachine: (functional-380530) Calling .DriverName
I0103 19:06:05.717690 22404 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0103 19:06:05.719231 22404 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0103 19:06:05.720759 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0103 19:06:05.720781 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0103 19:06:05.720808 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHHostname
I0103 19:06:05.723896 22404 main.go:141] libmachine: (functional-380530) DBG | domain functional-380530 has defined MAC address 52:54:00:cc:1c:df in network mk-functional-380530
I0103 19:06:05.724365 22404 main.go:141] libmachine: (functional-380530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:1c:df", ip: ""} in network mk-functional-380530: {Iface:virbr1 ExpiryTime:2024-01-03 20:03:35 +0000 UTC Type:0 Mac:52:54:00:cc:1c:df Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:functional-380530 Clientid:01:52:54:00:cc:1c:df}
I0103 19:06:05.724394 22404 main.go:141] libmachine: (functional-380530) DBG | domain functional-380530 has defined IP address 192.168.39.158 and MAC address 52:54:00:cc:1c:df in network mk-functional-380530
I0103 19:06:05.724565 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHPort
I0103 19:06:05.724760 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHKeyPath
I0103 19:06:05.724921 22404 main.go:141] libmachine: (functional-380530) Calling .GetSSHUsername
I0103 19:06:05.725053 22404 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17885-9089/.minikube/machines/functional-380530/id_rsa Username:docker}
I0103 19:06:05.857386 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0103 19:06:05.857411 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0103 19:06:05.876772 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0103 19:06:05.876797 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0103 19:06:05.894612 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0103 19:06:05.894634 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0103 19:06:05.912026 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0103 19:06:05.912051 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0103 19:06:05.932510 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-role.yaml
I0103 19:06:05.932574 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0103 19:06:05.949619 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0103 19:06:05.949648 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0103 19:06:05.972565 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0103 19:06:05.972584 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0103 19:06:06.006022 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0103 19:06:06.006045 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0103 19:06:06.044757 22404 addons.go:429] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0103 19:06:06.044786 22404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0103 19:06:06.070348 22404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0103 19:06:07.701227 22404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.630807798s)
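For reference, the addon install above is just ten manifests scp'd to /etc/kubernetes/addons/ and applied in one kubectl invocation against the node-local kubeconfig. A rough Go equivalent of that single apply step, assuming the manifests are already in place on the node (illustrative only, not minikube's addons code):

// apply_dashboard.go — rough equivalent of the single kubectl apply above.
// Assumption: the dashboard manifests were already copied to /etc/kubernetes/addons/ on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}

	// kubectl binary and kubeconfig paths as they appear in the ssh_runner command in the log.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}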
I0103 19:06:07.701306 22404 main.go:141] libmachine: Making call to close driver server
I0103 19:06:07.701322 22404 main.go:141] libmachine: (functional-380530) Calling .Close
I0103 19:06:07.701618 22404 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:06:07.701643 22404 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:06:07.701658 22404 main.go:141] libmachine: Making call to close driver server
I0103 19:06:07.701668 22404 main.go:141] libmachine: (functional-380530) Calling .Close
I0103 19:06:07.701882 22404 main.go:141] libmachine: (functional-380530) DBG | Closing plugin on server side
I0103 19:06:07.701926 22404 main.go:141] libmachine: Successfully made call to close driver server
I0103 19:06:07.701947 22404 main.go:141] libmachine: Making call to close connection to plugin binary
I0103 19:06:07.703899 22404 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-380530 addons enable metrics-server
I0103 19:06:07.705554 22404 addons.go:200] Writing out "functional-380530" config to set dashboard=true...
W0103 19:06:07.705794 22404 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0103 19:06:07.706492 22404 kapi.go:59] client config for functional-380530: &rest.Config{Host:"https://192.168.39.158:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17885-9089/.minikube/profiles/functional-380530/client.crt", KeyFile:"/home/jenkins/minikube-integration/17885-9089/.minikube/profiles/functional-380530/client.key", CAFile:"/home/jenkins/minikube-integration/17885-9089/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c20060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0103 19:06:07.730700 22404 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard e09b2df6-53ac-489b-9d28-fd7af5b42236 719 0 2024-01-03 19:06:07 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-01-03 19:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.32.174,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.32.174],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
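The health verification above boils down to building a client from the profile's certs and fetching the kubernetes-dashboard Service in the kubernetes-dashboard namespace. A short client-go sketch of the same lookup, assuming the jenkins kubeconfig path from this run (hypothetical helper, not minikube's kapi package):

// verify_dashboard_svc.go — client-go sketch of the "Verifying dashboard health" lookup.
// Assumption: the kubeconfig path is the one this run used; this is not minikube's own helper.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17885-9089/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The log above found a ClusterIP service (port 80 -> targetPort 9090) named kubernetes-dashboard.
	svc, err := cs.CoreV1().Services("kubernetes-dashboard").Get(context.Background(), "kubernetes-dashboard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service:", svc.Name, "clusterIP:", svc.Spec.ClusterIP)
}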
W0103 19:06:07.730840 22404 out.go:239] * Launching proxy ...
* Launching proxy ...
I0103 19:06:07.730900 22404 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-380530 proxy --port 36195]
I0103 19:06:07.731164 22404 dashboard.go:157] Waiting for kubectl to output host:port ...
I0103 19:06:07.785904 22404 out.go:177]
W0103 19:06:07.788009 22404 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0103 19:06:07.788028 22404 out.go:239] *
*
W0103 19:06:07.790729 22404 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0103 19:06:07.792309 22404 out.go:177]
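What failed: the dashboard command shells out to the host's kubectl ([/usr/local/bin/kubectl --context functional-380530 proxy --port 36195]) and blocks reading its stdout for a "Starting to serve on HOST:PORT" line. Here the proxy apparently exited (or closed its output) before printing anything, the read hit EOF (HOST_KUBECTL_PROXY), and no dashboard URL was ever emitted, which is why functional_test.go:914 reported that the output didn't produce a URL. A minimal Go sketch of that wait-for-host:port step with a bounded timeout (hypothetical reproduction, not minikube's dashboard.go):

// proxy_url_wait.go — sketch of waiting for kubectl proxy to announce its host:port.
// Hypothetical reproduction of the step that failed above; not minikube's dashboard.go.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-380530", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Read the first stdout line in a goroutine so the wait can be bounded.
	firstLine := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		if sc.Scan() {
			firstLine <- sc.Text()
		}
		close(firstLine) // a close with nothing queued means EOF before any output — the failure seen above
	}()

	select {
	case line, ok := <-firstLine:
		if !ok {
			fmt.Println("kubectl proxy exited before printing a host:port (the HOST_KUBECTL_PROXY case)")
			return
		}
		fmt.Println("proxy says:", line) // normally "Starting to serve on 127.0.0.1:36195"
	case <-time.After(30 * time.Second):
		fmt.Println("timed out waiting for kubectl proxy output")
	}
	_ = cmd.Process.Kill()
}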
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-380530 -n functional-380530
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-380530 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-380530 logs -n 25: (2.059927186s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| cache | delete | minikube | jenkins | v1.32.0 | 03 Jan 24 19:05 UTC | 03 Jan 24 19:05 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-380530 kubectl -- | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:05 UTC | 03 Jan 24 19:05 UTC |
| | --context functional-380530 | | | | | |
| | get pods | | | | | |
| start | -p functional-380530 | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:05 UTC | 03 Jan 24 19:05 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
| service | invalid-svc -p | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | functional-380530 | | | | | |
| cp | functional-380530 cp | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-380530 config unset | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | cpus | | | | | |
| config | functional-380530 config get | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | cpus | | | | | |
| config | functional-380530 config set | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | cpus 2 | | | | | |
| config | functional-380530 config get | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | cpus | | | | | |
| ssh | functional-380530 ssh -n | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | functional-380530 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-380530 config unset | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | cpus | | | | | |
| config | functional-380530 config get | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | cpus | | | | | |
| cp | functional-380530 cp | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | functional-380530:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd2671486142/001/cp-test.txt | | | | | |
| ssh | functional-380530 ssh -n | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | functional-380530 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-380530 cp | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-380530 ssh -n | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | functional-380530 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-380530 ssh findmnt | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| mount | -p functional-380530 | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port241529613/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-380530 ssh findmnt | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-380530 ssh -- ls | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | -la /mount-9p | | | | | |
| start | -p functional-380530 | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | functional-380530 ssh cat | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | 03 Jan 24 19:06 UTC |
| | /mount-9p/test-1704308763950044379 | | | | | |
| start | -p functional-380530 | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p functional-380530 --dry-run | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| dashboard | --url --port 36195 | functional-380530 | jenkins | v1.32.0 | 03 Jan 24 19:06 UTC | |
| | -p functional-380530 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/01/03 19:06:05
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.21.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0103 19:06:05.354751 22377 out.go:296] Setting OutFile to fd 1 ...
I0103 19:06:05.354892 22377 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:06:05.354902 22377 out.go:309] Setting ErrFile to fd 2...
I0103 19:06:05.354909 22377 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0103 19:06:05.355094 22377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17885-9089/.minikube/bin
I0103 19:06:05.355623 22377 out.go:303] Setting JSON to false
I0103 19:06:05.356554 22377 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2916,"bootTime":1704305849,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0103 19:06:05.356624 22377 start.go:138] virtualization: kvm guest
I0103 19:06:05.359199 22377 out.go:177] * [functional-380530] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0103 19:06:05.360785 22377 out.go:177] - MINIKUBE_LOCATION=17885
I0103 19:06:05.362181 22377 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0103 19:06:05.360787 22377 notify.go:220] Checking for updates...
I0103 19:06:05.364088 22377 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17885-9089/kubeconfig
I0103 19:06:05.365718 22377 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17885-9089/.minikube
I0103 19:06:05.367075 22377 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0103 19:06:05.368506 22377 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0103 19:06:05.370662 22377 config.go:182] Loaded profile config "functional-380530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0103 19:06:05.371233 22377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.371290 22377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.386352 22377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
I0103 19:06:05.386784 22377 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.387315 22377 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.387366 22377 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.387729 22377 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.387939 22377 main.go:141] libmachine: (functional-380530) Calling .DriverName
I0103 19:06:05.388188 22377 driver.go:392] Setting default libvirt URI to qemu:///system
I0103 19:06:05.388606 22377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0103 19:06:05.388657 22377 main.go:141] libmachine: Launching plugin server for driver kvm2
I0103 19:06:05.403704 22377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
I0103 19:06:05.404126 22377 main.go:141] libmachine: () Calling .GetVersion
I0103 19:06:05.404646 22377 main.go:141] libmachine: Using API Version 1
I0103 19:06:05.404677 22377 main.go:141] libmachine: () Calling .SetConfigRaw
I0103 19:06:05.405028 22377 main.go:141] libmachine: () Calling .GetMachineName
I0103 19:06:05.405245 22377 main.go:141] libmachine: (functional-380530) Calling .DriverName
I0103 19:06:05.438482 22377 out.go:177] * Using the kvm2 driver based on existing profile
I0103 19:06:05.440031 22377 start.go:298] selected driver: kvm2
I0103 19:06:05.440046 22377 start.go:902] validating driver "kvm2" against &{Name:functional-380530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-380530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0103 19:06:05.440156 22377 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0103 19:06:05.441319 22377 cni.go:84] Creating CNI manager for ""
I0103 19:06:05.441343 22377 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0103 19:06:05.441353 22377 start_flags.go:323] config:
{Name:functional-380530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-380530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.158 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
I0103 19:06:05.443016 22377 out.go:177] * dry-run validation complete!
==> Docker <==
-- Journal begins at Wed 2024-01-03 19:03:32 UTC, ends at Wed 2024-01-03 19:06:08 UTC. --
Jan 03 19:05:58 functional-380530 cri-dockerd[6707]: time="2024-01-03T19:05:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/063ce6488938c0e1656e874108ce57cc22528c5eb5efe3a73177929ee3b0dc5d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Jan 03 19:05:59 functional-380530 dockerd[6407]: time="2024-01-03T19:05:59.871642639Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Jan 03 19:05:59 functional-380530 dockerd[6407]: time="2024-01-03T19:05:59.871695070Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
Jan 03 19:06:01 functional-380530 dockerd[6407]: time="2024-01-03T19:06:01.628050536Z" level=info msg="ignoring event" container=063ce6488938c0e1656e874108ce57cc22528c5eb5efe3a73177929ee3b0dc5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 03 19:06:01 functional-380530 dockerd[6413]: time="2024-01-03T19:06:01.628712514Z" level=info msg="shim disconnected" id=063ce6488938c0e1656e874108ce57cc22528c5eb5efe3a73177929ee3b0dc5d namespace=moby
Jan 03 19:06:01 functional-380530 dockerd[6413]: time="2024-01-03T19:06:01.628766728Z" level=warning msg="cleaning up after shim disconnected" id=063ce6488938c0e1656e874108ce57cc22528c5eb5efe3a73177929ee3b0dc5d namespace=moby
Jan 03 19:06:01 functional-380530 dockerd[6413]: time="2024-01-03T19:06:01.628775090Z" level=info msg="cleaning up dead shim" namespace=moby
Jan 03 19:06:03 functional-380530 dockerd[6413]: time="2024-01-03T19:06:03.131311005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 03 19:06:03 functional-380530 dockerd[6413]: time="2024-01-03T19:06:03.131388554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:03 functional-380530 dockerd[6413]: time="2024-01-03T19:06:03.131496784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 03 19:06:03 functional-380530 dockerd[6413]: time="2024-01-03T19:06:03.131521177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:03 functional-380530 cri-dockerd[6707]: time="2024-01-03T19:06:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/58d0a061b969bca98eb1c9bac5ca9b660f984f246cddf474afba7cf4165cf6a7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Jan 03 19:06:06 functional-380530 dockerd[6413]: time="2024-01-03T19:06:06.167593655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 03 19:06:06 functional-380530 dockerd[6413]: time="2024-01-03T19:06:06.167819320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:06 functional-380530 dockerd[6413]: time="2024-01-03T19:06:06.167846381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 03 19:06:06 functional-380530 dockerd[6413]: time="2024-01-03T19:06:06.167856862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:06 functional-380530 cri-dockerd[6707]: time="2024-01-03T19:06:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5d4e713531401e92bd61da9e55a6388ebf5934dde8def7aed0331bb13cd99af1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.217386620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.218276124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.221418686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.221701529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.244901154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.245305255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.245558141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 03 19:06:08 functional-380530 dockerd[6413]: time="2024-01-03T19:06:08.245767956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c8eeea8441643 ead0a4a53df89 27 seconds ago Running coredns 2 8673ff2f07803 coredns-5dd5756b68-xtchs
40461b050c7f4 6e38f40d628db 27 seconds ago Running storage-provisioner 2 54b806314092e storage-provisioner
7690c10625874 83f6cc407eed8 28 seconds ago Running kube-proxy 2 dcbe2a16b96e4 kube-proxy-lq8mq
024def049b36e 73deb9a3f7025 33 seconds ago Running etcd 2 344fcad37c738 etcd-functional-380530
bbf1cb8c70202 d058aa5ab969c 33 seconds ago Running kube-controller-manager 2 dd64c741cf6ed kube-controller-manager-functional-380530
02b14efe05c2e e3db313c6dbc0 33 seconds ago Running kube-scheduler 2 83e4e137f5d59 kube-scheduler-functional-380530
f44fef3eb9f24 7fe0e6f37db33 33 seconds ago Running kube-apiserver 0 fe5bca46c4e68 kube-apiserver-functional-380530
48a7dc9e3b762 6e38f40d628db About a minute ago Exited storage-provisioner 1 781f90e7fd58e storage-provisioner
1885f46728099 83f6cc407eed8 About a minute ago Exited kube-proxy 1 26844fd1e7de4 kube-proxy-lq8mq
b7763ea2ccc6c e3db313c6dbc0 About a minute ago Exited kube-scheduler 1 bc1b99efe7cee kube-scheduler-functional-380530
ad17983679c15 ead0a4a53df89 About a minute ago Exited coredns 1 0907fb18bc81a coredns-5dd5756b68-xtchs
b7ffe4ad90051 73deb9a3f7025 About a minute ago Exited etcd 1 341024980c297 etcd-functional-380530
1d104fe07a093 d058aa5ab969c About a minute ago Exited kube-controller-manager 1 c6315205a6de4 kube-controller-manager-functional-380530
860383173b4b4 7fe0e6f37db33 About a minute ago Exited kube-apiserver 1 774ba996264d8 kube-apiserver-functional-380530
==> coredns [ad17983679c1] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:46075 - 26260 "HINFO IN 6418775790021642624.7469527936822377837. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.060873472s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [c8eeea844164] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:50172 - 24498 "HINFO IN 617071357061748472.4446207476151518928. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02026116s
==> describe nodes <==
Name: functional-380530
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-380530
kubernetes.io/os=linux
minikube.k8s.io/commit=1b6a81cbc05f28310ff11df4170e79e2b8bf477a
minikube.k8s.io/name=functional-380530
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_01_03T19_04_11_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 03 Jan 2024 19:04:08 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-380530
AcquireTime: <unset>
RenewTime: Wed, 03 Jan 2024 19:06:00 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 03 Jan 2024 19:05:39 +0000 Wed, 03 Jan 2024 19:04:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 03 Jan 2024 19:05:39 +0000 Wed, 03 Jan 2024 19:04:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 03 Jan 2024 19:05:39 +0000 Wed, 03 Jan 2024 19:04:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 03 Jan 2024 19:05:39 +0000 Wed, 03 Jan 2024 19:04:12 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.158
Hostname: functional-380530
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
System Info:
Machine ID: de45c8eeaddb454e8f89731502e638f1
System UUID: de45c8ee-addb-454e-8f89-731502e638f1
Boot ID: ee204899-38f1-46ee-a2be-7771dd3526f8
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.28.4
Kube-Proxy Version: v1.28.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-mount 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4s
default hello-node-d7447cc7f-2wg4n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7s
kube-system coredns-5dd5756b68-xtchs 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 106s
kube-system etcd-functional-380530 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 117s
kube-system kube-apiserver-functional-380530 250m (12%) 0 (0%) 0 (0%) 0 (0%) 27s
kube-system kube-controller-manager-functional-380530 200m (10%) 0 (0%) 0 (0%) 0 (0%) 117s
kube-system kube-proxy-lq8mq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 106s
kube-system kube-scheduler-functional-380530 100m (5%) 0 (0%) 0 (0%) 0 (0%) 117s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 104s
kubernetes-dashboard dashboard-metrics-scraper-7fd5cb4ddc-xtcvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
kubernetes-dashboard kubernetes-dashboard-8694d4445c-pqbmt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 104s kube-proxy
Normal Starting 27s kube-proxy
Normal Starting 75s kube-proxy
Normal NodeHasSufficientPID 2m6s (x7 over 2m6s) kubelet Node functional-380530 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m6s (x8 over 2m6s) kubelet Node functional-380530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m6s (x8 over 2m6s) kubelet Node functional-380530 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 2m6s kubelet Updated Node Allocatable limit across pods
Normal Starting 118s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 118s kubelet Node functional-380530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s kubelet Node functional-380530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 118s kubelet Node functional-380530 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 118s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 117s kubelet Node functional-380530 status is now: NodeReady
Normal RegisteredNode 106s node-controller Node functional-380530 event: Registered Node functional-380530 in Controller
Normal NodeNotReady 97s kubelet Node functional-380530 status is now: NodeNotReady
Normal RegisteredNode 62s node-controller Node functional-380530 event: Registered Node functional-380530 in Controller
Normal Starting 35s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 35s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 34s (x8 over 35s) kubelet Node functional-380530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 34s (x8 over 35s) kubelet Node functional-380530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 34s (x7 over 35s) kubelet Node functional-380530 status is now: NodeHasSufficientPID
Normal RegisteredNode 17s node-controller Node functional-380530 event: Registered Node functional-380530 in Controller
==> dmesg <==
[ +7.905352] systemd-fstab-generator[2440]: Ignoring "noauto" for root device
[ +19.414799] systemd-fstab-generator[3388]: Ignoring "noauto" for root device
[ +0.283837] systemd-fstab-generator[3422]: Ignoring "noauto" for root device
[ +0.145412] systemd-fstab-generator[3433]: Ignoring "noauto" for root device
[ +0.152616] systemd-fstab-generator[3446]: Ignoring "noauto" for root device
[ +5.213380] kauditd_printk_skb: 23 callbacks suppressed
[ +6.603605] systemd-fstab-generator[4049]: Ignoring "noauto" for root device
[ +0.114375] systemd-fstab-generator[4060]: Ignoring "noauto" for root device
[ +0.097741] systemd-fstab-generator[4071]: Ignoring "noauto" for root device
[ +0.121936] systemd-fstab-generator[4082]: Ignoring "noauto" for root device
[ +0.134017] systemd-fstab-generator[4102]: Ignoring "noauto" for root device
[ +7.309849] kauditd_printk_skb: 29 callbacks suppressed
[Jan 3 19:05] systemd-fstab-generator[5925]: Ignoring "noauto" for root device
[ +0.306381] systemd-fstab-generator[5959]: Ignoring "noauto" for root device
[ +0.160570] systemd-fstab-generator[5970]: Ignoring "noauto" for root device
[ +0.166483] systemd-fstab-generator[5983]: Ignoring "noauto" for root device
[ +11.971054] systemd-fstab-generator[6582]: Ignoring "noauto" for root device
[ +0.113938] systemd-fstab-generator[6593]: Ignoring "noauto" for root device
[ +0.119124] systemd-fstab-generator[6604]: Ignoring "noauto" for root device
[ +0.114510] systemd-fstab-generator[6622]: Ignoring "noauto" for root device
[ +0.140259] systemd-fstab-generator[6642]: Ignoring "noauto" for root device
[ +2.194495] systemd-fstab-generator[7056]: Ignoring "noauto" for root device
[ +7.728022] kauditd_printk_skb: 29 callbacks suppressed
[Jan 3 19:06] kauditd_printk_skb: 11 callbacks suppressed
[ +5.576684] kauditd_printk_skb: 6 callbacks suppressed
==> etcd [024def049b36] <==
{"level":"info","ts":"2024-01-03T19:05:37.180382Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-01-03T19:05:37.180573Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-01-03T19:05:37.181012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 switched to configuration voters=(14043276751669556357)"}
{"level":"info","ts":"2024-01-03T19:05:37.181093Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","added-peer-id":"c2e3bdcd19c3f485","added-peer-peer-urls":["https://192.168.39.158:2380"]}
{"level":"info","ts":"2024-01-03T19:05:37.181289Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"632f2ed81879f448","local-member-id":"c2e3bdcd19c3f485","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-03T19:05:37.181484Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-03T19:05:37.187748Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-01-03T19:05:37.18794Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.158:2380"}
{"level":"info","ts":"2024-01-03T19:05:37.188111Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.158:2380"}
{"level":"info","ts":"2024-01-03T19:05:37.1883Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c2e3bdcd19c3f485","initial-advertise-peer-urls":["https://192.168.39.158:2380"],"listen-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.158:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-01-03T19:05:37.188904Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-01-03T19:05:38.439644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 is starting a new election at term 3"}
{"level":"info","ts":"2024-01-03T19:05:38.439723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became pre-candidate at term 3"}
{"level":"info","ts":"2024-01-03T19:05:38.43974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgPreVoteResp from c2e3bdcd19c3f485 at term 3"}
{"level":"info","ts":"2024-01-03T19:05:38.439751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became candidate at term 4"}
{"level":"info","ts":"2024-01-03T19:05:38.439756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgVoteResp from c2e3bdcd19c3f485 at term 4"}
{"level":"info","ts":"2024-01-03T19:05:38.439764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became leader at term 4"}
{"level":"info","ts":"2024-01-03T19:05:38.439808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2e3bdcd19c3f485 elected leader c2e3bdcd19c3f485 at term 4"}
{"level":"info","ts":"2024-01-03T19:05:38.445907Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2e3bdcd19c3f485","local-member-attributes":"{Name:functional-380530 ClientURLs:[https://192.168.39.158:2379]}","request-path":"/0/members/c2e3bdcd19c3f485/attributes","cluster-id":"632f2ed81879f448","publish-timeout":"7s"}
{"level":"info","ts":"2024-01-03T19:05:38.445966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-03T19:05:38.446154Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-03T19:05:38.44712Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-01-03T19:05:38.448638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.158:2379"}
{"level":"info","ts":"2024-01-03T19:05:38.449309Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-01-03T19:05:38.449345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
==> etcd [b7ffe4ad9005] <==
{"level":"info","ts":"2024-01-03T19:04:51.27995Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.158:2380"}
{"level":"info","ts":"2024-01-03T19:04:52.518533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 is starting a new election at term 2"}
{"level":"info","ts":"2024-01-03T19:04:52.518595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became pre-candidate at term 2"}
{"level":"info","ts":"2024-01-03T19:04:52.518622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgPreVoteResp from c2e3bdcd19c3f485 at term 2"}
{"level":"info","ts":"2024-01-03T19:04:52.518635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became candidate at term 3"}
{"level":"info","ts":"2024-01-03T19:04:52.518647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 received MsgVoteResp from c2e3bdcd19c3f485 at term 3"}
{"level":"info","ts":"2024-01-03T19:04:52.518655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c2e3bdcd19c3f485 became leader at term 3"}
{"level":"info","ts":"2024-01-03T19:04:52.518661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c2e3bdcd19c3f485 elected leader c2e3bdcd19c3f485 at term 3"}
{"level":"info","ts":"2024-01-03T19:04:52.528895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-03T19:04:52.529928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.158:2379"}
{"level":"info","ts":"2024-01-03T19:04:52.530246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-03T19:04:52.533102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-01-03T19:04:52.52884Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c2e3bdcd19c3f485","local-member-attributes":"{Name:functional-380530 ClientURLs:[https://192.168.39.158:2379]}","request-path":"/0/members/c2e3bdcd19c3f485/attributes","cluster-id":"632f2ed81879f448","publish-timeout":"7s"}
{"level":"info","ts":"2024-01-03T19:04:52.544617Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-01-03T19:04:52.54487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-01-03T19:05:20.112023Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-01-03T19:05:20.112138Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-380530","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
{"level":"warn","ts":"2024-01-03T19:05:20.112213Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
{"level":"warn","ts":"2024-01-03T19:05:20.112244Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.158:2379: use of closed network connection"}
{"level":"warn","ts":"2024-01-03T19:05:20.112311Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-01-03T19:05:20.112367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2024-01-03T19:05:20.139602Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c2e3bdcd19c3f485","current-leader-member-id":"c2e3bdcd19c3f485"}
{"level":"info","ts":"2024-01-03T19:05:20.144814Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.158:2380"}
{"level":"info","ts":"2024-01-03T19:05:20.144931Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.158:2380"}
{"level":"info","ts":"2024-01-03T19:05:20.144942Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-380530","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.158:2380"],"advertise-client-urls":["https://192.168.39.158:2379"]}
==> kernel <==
19:06:09 up 2 min, 0 users, load average: 1.65, 0.82, 0.32
Linux functional-380530 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
==> kube-apiserver [860383173b4b] <==
W0103 19:05:29.297176 1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.322666 1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.323888 1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.333904 1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.347156 1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.426380 1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.619808 1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.622174 1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.640717 1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.667017 1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.670650 1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.683823 1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.701650 1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.714938 1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.720550 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.742924 1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.759251 1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.778310 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.832224 1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.886245 1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.904976 1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.942386 1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:29.942415 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:30.031165 1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0103 19:05:30.051810 1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-apiserver [f44fef3eb9f2] <==
I0103 19:05:39.840707 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0103 19:05:39.840758 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0103 19:05:39.843115 1 shared_informer.go:318] Caches are synced for crd-autoregister
I0103 19:05:39.847975 1 shared_informer.go:318] Caches are synced for node_authorizer
I0103 19:05:39.850322 1 shared_informer.go:318] Caches are synced for configmaps
I0103 19:05:39.853033 1 aggregator.go:166] initial CRD sync complete...
I0103 19:05:39.853075 1 autoregister_controller.go:141] Starting autoregister controller
I0103 19:05:39.853082 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0103 19:05:39.853089 1 cache.go:39] Caches are synced for autoregister controller
I0103 19:05:39.855587 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
E0103 19:05:39.883310 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0103 19:05:40.734416 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0103 19:05:41.627112 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0103 19:05:41.642115 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0103 19:05:41.689249 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0103 19:05:41.729177 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0103 19:05:41.741382 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0103 19:05:52.512651 1 controller.go:624] quota admission added evaluator for: endpoints
I0103 19:05:52.716732 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0103 19:05:58.185403 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.19.32"}
I0103 19:06:02.651913 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0103 19:06:02.769387 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.166.23"}
I0103 19:06:07.159896 1 controller.go:624] quota admission added evaluator for: namespaces
I0103 19:06:07.539175 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.32.174"}
I0103 19:06:07.676698 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.217.89"}
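The apiserver entries above show cluster IPs being allocated for the kubernetes-dashboard and dashboard-metrics-scraper services a couple of seconds before the test gave up waiting for a URL. Not part of the captured run, but assuming the functional-380530 profile is still up, the allocations could be double-checked with:

  kubectl --context functional-380530 -n kubernetes-dashboard get svc kubernetes-dashboard dashboard-metrics-scraper -o wide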
==> kube-controller-manager [1d104fe07a09] <==
I0103 19:05:07.303743 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
I0103 19:05:07.308846 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0103 19:05:07.312845 1 shared_informer.go:318] Caches are synced for taint
I0103 19:05:07.313120 1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
I0103 19:05:07.313405 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-380530"
I0103 19:05:07.313695 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
I0103 19:05:07.314186 1 taint_manager.go:205] "Starting NoExecuteTaintManager"
I0103 19:05:07.317387 1 taint_manager.go:210] "Sending events to api server"
I0103 19:05:07.330619 1 shared_informer.go:318] Caches are synced for namespace
I0103 19:05:07.330837 1 shared_informer.go:318] Caches are synced for ReplicationController
I0103 19:05:07.330902 1 shared_informer.go:318] Caches are synced for GC
I0103 19:05:07.338239 1 event.go:307] "Event occurred" object="functional-380530" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-380530 event: Registered Node functional-380530 in Controller"
I0103 19:05:07.340721 1 shared_informer.go:318] Caches are synced for ephemeral
I0103 19:05:07.341299 1 shared_informer.go:318] Caches are synced for ReplicaSet
I0103 19:05:07.348616 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.063891ms"
I0103 19:05:07.351268 1 shared_informer.go:318] Caches are synced for stateful set
I0103 19:05:07.355016 1 shared_informer.go:318] Caches are synced for HPA
I0103 19:05:07.407223 1 shared_informer.go:318] Caches are synced for resource quota
I0103 19:05:07.449975 1 shared_informer.go:318] Caches are synced for persistent volume
I0103 19:05:07.455934 1 shared_informer.go:318] Caches are synced for disruption
I0103 19:05:07.458810 1 shared_informer.go:318] Caches are synced for resource quota
I0103 19:05:07.473519 1 shared_informer.go:318] Caches are synced for deployment
I0103 19:05:07.855851 1 shared_informer.go:318] Caches are synced for garbage collector
I0103 19:05:07.882609 1 shared_informer.go:318] Caches are synced for garbage collector
I0103 19:05:07.882715 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
==> kube-controller-manager [bbf1cb8c7020] <==
E0103 19:06:07.311828 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0103 19:06:07.312252 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0103 19:06:07.330293 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.336625ms"
E0103 19:06:07.330342 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0103 19:06:07.330579 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0103 19:06:07.339379 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.956147ms"
E0103 19:06:07.339421 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0103 19:06:07.339511 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0103 19:06:07.351130 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="8.551707ms"
E0103 19:06:07.351173 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0103 19:06:07.351208 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0103 19:06:07.363952 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.69114ms"
E0103 19:06:07.363993 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0103 19:06:07.364134 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0103 19:06:07.453969 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-xtcvf"
I0103 19:06:07.459891 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-pqbmt"
I0103 19:06:07.484630 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="53.382615ms"
I0103 19:06:07.487024 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.035521ms"
I0103 19:06:07.515705 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="30.800413ms"
I0103 19:06:07.515956 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="217.791µs"
I0103 19:06:07.521228 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="34.165543ms"
I0103 19:06:07.521309 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.098µs"
I0103 19:06:07.580726 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.022µs"
I0103 19:06:07.622601 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="159.639µs"
I0103 19:06:07.921521 1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
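The FailedCreate retries above stop once the kubernetes-dashboard ServiceAccount shows up; the SuccessfulCreate events for both ReplicaSets follow within the same second. As a manual sanity check outside this run, the ServiceAccount the ReplicaSets depend on could be listed with:

  kubectl --context functional-380530 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard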
==> kube-proxy [1885f4672809] <==
I0103 19:04:53.265551 1 server_others.go:69] "Using iptables proxy"
I0103 19:04:54.399263 1 node.go:141] Successfully retrieved node IP: 192.168.39.158
I0103 19:04:54.505508 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0103 19:04:54.505547 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0103 19:04:54.510805 1 server_others.go:152] "Using iptables Proxier"
I0103 19:04:54.510877 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0103 19:04:54.511402 1 server.go:846] "Version info" version="v1.28.4"
I0103 19:04:54.511506 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0103 19:04:54.520814 1 config.go:188] "Starting service config controller"
I0103 19:04:54.521046 1 shared_informer.go:311] Waiting for caches to sync for service config
I0103 19:04:54.521236 1 config.go:97] "Starting endpoint slice config controller"
I0103 19:04:54.521920 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0103 19:04:54.521664 1 config.go:315] "Starting node config controller"
I0103 19:04:54.534770 1 shared_informer.go:311] Waiting for caches to sync for node config
I0103 19:04:54.532870 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0103 19:04:54.621783 1 shared_informer.go:318] Caches are synced for service config
I0103 19:04:54.635304 1 shared_informer.go:318] Caches are synced for node config
==> kube-proxy [7690c1062587] <==
I0103 19:05:42.094224 1 server_others.go:69] "Using iptables proxy"
I0103 19:05:42.112536 1 node.go:141] Successfully retrieved node IP: 192.168.39.158
I0103 19:05:42.212358 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0103 19:05:42.212405 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0103 19:05:42.215747 1 server_others.go:152] "Using iptables Proxier"
I0103 19:05:42.215809 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0103 19:05:42.215999 1 server.go:846] "Version info" version="v1.28.4"
I0103 19:05:42.216033 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0103 19:05:42.217011 1 config.go:188] "Starting service config controller"
I0103 19:05:42.217065 1 shared_informer.go:311] Waiting for caches to sync for service config
I0103 19:05:42.217090 1 config.go:97] "Starting endpoint slice config controller"
I0103 19:05:42.217094 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0103 19:05:42.217770 1 config.go:315] "Starting node config controller"
I0103 19:05:42.217777 1 shared_informer.go:311] Waiting for caches to sync for node config
I0103 19:05:42.318221 1 shared_informer.go:318] Caches are synced for node config
I0103 19:05:42.318267 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0103 19:05:42.318265 1 shared_informer.go:318] Caches are synced for service config
==> kube-scheduler [02b14efe05c2] <==
I0103 19:05:37.601998 1 serving.go:348] Generated self-signed cert in-memory
W0103 19:05:39.815904 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0103 19:05:39.815947 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0103 19:05:39.815957 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0103 19:05:39.815964 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0103 19:05:39.864891 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I0103 19:05:39.864936 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0103 19:05:39.871031 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0103 19:05:39.872777 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0103 19:05:39.872826 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0103 19:05:39.876163 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0103 19:05:39.972979 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [b7763ea2ccc6] <==
I0103 19:04:53.006096 1 serving.go:348] Generated self-signed cert in-memory
W0103 19:04:54.293322 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0103 19:04:54.293688 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0103 19:04:54.297830 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0103 19:04:54.301518 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0103 19:04:54.357117 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I0103 19:04:54.357798 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0103 19:04:54.360001 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0103 19:04:54.360299 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0103 19:04:54.361577 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0103 19:04:54.360385 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0103 19:04:54.462516 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0103 19:05:20.109247 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0103 19:05:20.109747 1 run.go:74] "command failed" err="finished without leader elect"
==> kubelet <==
-- Journal begins at Wed 2024-01-03 19:03:32 UTC, ends at Wed 2024-01-03 19:06:10 UTC. --
Jan 03 19:05:59 functional-380530 kubelet[7062]: E0103 19:05:59.875868 7062 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied\"" pod="default/invalid-svc" podUID="6a167bb9-a47d-470d-b687-1e859f8fce10"
Jan 03 19:06:00 functional-380530 kubelet[7062]: E0103 19:06:00.530639 7062 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\"\"" pod="default/invalid-svc" podUID="6a167bb9-a47d-470d-b687-1e859f8fce10"
Jan 03 19:06:01 functional-380530 kubelet[7062]: I0103 19:06:01.789195 7062 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96rwr\" (UniqueName: \"kubernetes.io/projected/6a167bb9-a47d-470d-b687-1e859f8fce10-kube-api-access-96rwr\") pod \"6a167bb9-a47d-470d-b687-1e859f8fce10\" (UID: \"6a167bb9-a47d-470d-b687-1e859f8fce10\") "
Jan 03 19:06:01 functional-380530 kubelet[7062]: I0103 19:06:01.791631 7062 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a167bb9-a47d-470d-b687-1e859f8fce10-kube-api-access-96rwr" (OuterVolumeSpecName: "kube-api-access-96rwr") pod "6a167bb9-a47d-470d-b687-1e859f8fce10" (UID: "6a167bb9-a47d-470d-b687-1e859f8fce10"). InnerVolumeSpecName "kube-api-access-96rwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 03 19:06:01 functional-380530 kubelet[7062]: I0103 19:06:01.890637 7062 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-96rwr\" (UniqueName: \"kubernetes.io/projected/6a167bb9-a47d-470d-b687-1e859f8fce10-kube-api-access-96rwr\") on node \"functional-380530\" DevicePath \"\""
Jan 03 19:06:02 functional-380530 kubelet[7062]: I0103 19:06:02.718839 7062 topology_manager.go:215] "Topology Admit Handler" podUID="2eca0a99-fb8d-453d-82fd-47adc9f74c71" podNamespace="default" podName="hello-node-d7447cc7f-2wg4n"
Jan 03 19:06:02 functional-380530 kubelet[7062]: I0103 19:06:02.719025 7062 memory_manager.go:346] "RemoveStaleState removing state" podUID="abedb9418c0257f56bf50a5575bb67dc" containerName="kube-apiserver"
Jan 03 19:06:02 functional-380530 kubelet[7062]: I0103 19:06:02.798178 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6whd\" (UniqueName: \"kubernetes.io/projected/2eca0a99-fb8d-453d-82fd-47adc9f74c71-kube-api-access-j6whd\") pod \"hello-node-d7447cc7f-2wg4n\" (UID: \"2eca0a99-fb8d-453d-82fd-47adc9f74c71\") " pod="default/hello-node-d7447cc7f-2wg4n"
Jan 03 19:06:02 functional-380530 kubelet[7062]: I0103 19:06:02.862261 7062 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6a167bb9-a47d-470d-b687-1e859f8fce10" path="/var/lib/kubelet/pods/6a167bb9-a47d-470d-b687-1e859f8fce10/volumes"
Jan 03 19:06:03 functional-380530 kubelet[7062]: I0103 19:06:03.651516 7062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d0a061b969bca98eb1c9bac5ca9b660f984f246cddf474afba7cf4165cf6a7"
Jan 03 19:06:05 functional-380530 kubelet[7062]: I0103 19:06:05.390117 7062 topology_manager.go:215] "Topology Admit Handler" podUID="5003f8c8-32a6-481c-918a-a4b41206fda3" podNamespace="default" podName="busybox-mount"
Jan 03 19:06:05 functional-380530 kubelet[7062]: I0103 19:06:05.519854 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsnz\" (UniqueName: \"kubernetes.io/projected/5003f8c8-32a6-481c-918a-a4b41206fda3-kube-api-access-8qsnz\") pod \"busybox-mount\" (UID: \"5003f8c8-32a6-481c-918a-a4b41206fda3\") " pod="default/busybox-mount"
Jan 03 19:06:05 functional-380530 kubelet[7062]: I0103 19:06:05.519901 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5003f8c8-32a6-481c-918a-a4b41206fda3-test-volume\") pod \"busybox-mount\" (UID: \"5003f8c8-32a6-481c-918a-a4b41206fda3\") " pod="default/busybox-mount"
Jan 03 19:06:06 functional-380530 kubelet[7062]: I0103 19:06:06.927589 7062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4e713531401e92bd61da9e55a6388ebf5934dde8def7aed0331bb13cd99af1"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.479975 7062 topology_manager.go:215] "Topology Admit Handler" podUID="e6652af3-fc3b-46ec-8385-3a29381648a8" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-xtcvf"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.480186 7062 topology_manager.go:215] "Topology Admit Handler" podUID="ca1efbab-ce4f-467d-b8f5-ecdcef709a51" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-pqbmt"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.541001 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphqz\" (UniqueName: \"kubernetes.io/projected/e6652af3-fc3b-46ec-8385-3a29381648a8-kube-api-access-dphqz\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-xtcvf\" (UID: \"e6652af3-fc3b-46ec-8385-3a29381648a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-xtcvf"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.541254 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w844v\" (UniqueName: \"kubernetes.io/projected/ca1efbab-ce4f-467d-b8f5-ecdcef709a51-kube-api-access-w844v\") pod \"kubernetes-dashboard-8694d4445c-pqbmt\" (UID: \"ca1efbab-ce4f-467d-b8f5-ecdcef709a51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pqbmt"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.541378 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6652af3-fc3b-46ec-8385-3a29381648a8-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-xtcvf\" (UID: \"e6652af3-fc3b-46ec-8385-3a29381648a8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-xtcvf"
Jan 03 19:06:07 functional-380530 kubelet[7062]: I0103 19:06:07.541765 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ca1efbab-ce4f-467d-b8f5-ecdcef709a51-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-pqbmt\" (UID: \"ca1efbab-ce4f-467d-b8f5-ecdcef709a51\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-pqbmt"
Jan 03 19:06:09 functional-380530 kubelet[7062]: I0103 19:06:09.331028 7062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920e3367b5e86f7c615816ed4c648dc89fe15d8f171042e1472d4874c7803e43"
Jan 03 19:06:09 functional-380530 kubelet[7062]: I0103 19:06:09.368032 7062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f5d10696b0cc08e6c0b7e1b002498a6ba71c2e52a1e6a89a9a0ef6fb2232e2"
Jan 03 19:06:09 functional-380530 kubelet[7062]: I0103 19:06:09.872675 7062 topology_manager.go:215] "Topology Admit Handler" podUID="982010e1-6f6d-4ff9-9742-7b1a28eb88be" podNamespace="default" podName="sp-pod"
Jan 03 19:06:09 functional-380530 kubelet[7062]: I0103 19:06:09.973862 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv28z\" (UniqueName: \"kubernetes.io/projected/982010e1-6f6d-4ff9-9742-7b1a28eb88be-kube-api-access-qv28z\") pod \"sp-pod\" (UID: \"982010e1-6f6d-4ff9-9742-7b1a28eb88be\") " pod="default/sp-pod"
Jan 03 19:06:09 functional-380530 kubelet[7062]: I0103 19:06:09.973936 7062 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2\" (UniqueName: \"kubernetes.io/host-path/982010e1-6f6d-4ff9-9742-7b1a28eb88be-pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2\") pod \"sp-pod\" (UID: \"982010e1-6f6d-4ff9-9742-7b1a28eb88be\") " pod="default/sp-pod"
==> storage-provisioner [40461b050c7f] <==
I0103 19:05:42.554693 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0103 19:05:42.585813 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0103 19:05:42.585870 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0103 19:05:59.992498 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0103 19:05:59.994616 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-380530_57d44123-0f73-4138-92e1-a41ca52ea394!
I0103 19:05:59.995267 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ac5616d-22b9-4a72-8381-b4c346ff4ce8", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-380530_57d44123-0f73-4138-92e1-a41ca52ea394 became leader
I0103 19:06:00.095781 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-380530_57d44123-0f73-4138-92e1-a41ca52ea394!
I0103 19:06:07.929316 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0103 19:06:07.929537 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard c368d9d6-8d3b-4e9b-bf9f-98ec19fa782e 390 0 2024-01-03 19:04:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-01-03 19:04:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2 &PersistentVolumeClaim{ObjectMeta:{myclaim default 87443ed1-90be-45e1-914d-bfe4b3175fa2 731 0 2024-01-03 19:06:07 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-01-03 19:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-01-03 19:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0103 19:06:07.933541 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"87443ed1-90be-45e1-914d-bfe4b3175fa2", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0103 19:06:07.942911 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2" provisioned
I0103 19:06:07.943052 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0103 19:06:07.943147 1 volume_store.go:212] Trying to save persistentvolume "pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2"
I0103 19:06:07.986490 1 volume_store.go:219] persistentvolume "pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2" saved
I0103 19:06:07.988325 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"87443ed1-90be-45e1-914d-bfe4b3175fa2", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2
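The provisioner log above records volume pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2 being provisioned and saved for default/myclaim under /tmp/hostpath-provisioner. Outside the captured run, and assuming the cluster is still reachable, the resulting binding could be verified with:

  kubectl --context functional-380530 get pvc myclaim
  kubectl --context functional-380530 get pv pvc-87443ed1-90be-45e1-914d-bfe4b3175fa2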
==> storage-provisioner [48a7dc9e3b76] <==
I0103 19:04:54.177995 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0103 19:04:54.367314 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0103 19:04:54.367659 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0103 19:05:11.797796 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0103 19:05:11.798169 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-380530_9923b615-d943-4e97-ab0d-7ff1bbd59351!
I0103 19:05:11.799042 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ac5616d-22b9-4a72-8381-b4c346ff4ce8", APIVersion:"v1", ResourceVersion:"508", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-380530_9923b615-d943-4e97-ab0d-7ff1bbd59351 became leader
I0103 19:05:11.898575 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-380530_9923b615-d943-4e97-ab0d-7ff1bbd59351!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-380530 -n functional-380530
helpers_test.go:261: (dbg) Run: kubectl --context functional-380530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-xtcvf kubernetes-dashboard-8694d4445c-pqbmt
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-380530 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-xtcvf kubernetes-dashboard-8694d4445c-pqbmt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-380530 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-xtcvf kubernetes-dashboard-8694d4445c-pqbmt: exit status 1 (99.845727ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-380530/192.168.39.158
Start Time: Wed, 03 Jan 2024 19:06:05 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mount-munger:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qsnz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-8qsnz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5s default-scheduler Successfully assigned default/busybox-mount to functional-380530
Normal Pulling 3s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-380530/192.168.39.158
Start Time: Wed, 03 Jan 2024 19:06:09 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qv28z (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-qv28z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 0s default-scheduler Successfully assigned default/sp-pod to functional-380530
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-xtcvf" not found
Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-pqbmt" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-380530 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-xtcvf kubernetes-dashboard-8694d4445c-pqbmt: exit status 1
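The NotFound errors in stderr are expected from this invocation: kubectl describe pod with bare pod names only searches the context's current namespace (default here), while the two dashboard pods live in the kubernetes-dashboard namespace, as the controller-manager and kubelet logs above show. A namespace-qualified describe (not part of the test run) would reach them:

  kubectl --context functional-380530 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-7fd5cb4ddc-xtcvf kubernetes-dashboard-8694d4445c-pqbmt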
--- FAIL: TestFunctional/parallel/DashboardCmd (5.32s)