=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-252045 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-252045 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-252045 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-252045 --alsologtostderr -v=1] stderr:
I0216 16:50:17.163281 20285 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:17.163703 20285 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:17.163730 20285 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:17.163745 20285 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:17.164049 20285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-7030/.minikube/bin
I0216 16:50:17.164443 20285 mustload.go:65] Loading cluster: functional-252045
I0216 16:50:17.165010 20285 config.go:182] Loaded profile config "functional-252045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:17.165604 20285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:17.165689 20285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:17.181965 20285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
I0216 16:50:17.182445 20285 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:17.183129 20285 main.go:141] libmachine: Using API Version 1
I0216 16:50:17.183155 20285 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:17.183580 20285 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:17.183808 20285 main.go:141] libmachine: (functional-252045) Calling .GetState
I0216 16:50:17.185703 20285 host.go:66] Checking if "functional-252045" exists ...
I0216 16:50:17.186000 20285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:17.186044 20285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:17.206525 20285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
I0216 16:50:17.207079 20285 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:17.207587 20285 main.go:141] libmachine: Using API Version 1
I0216 16:50:17.207620 20285 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:17.207997 20285 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:17.208188 20285 main.go:141] libmachine: (functional-252045) Calling .DriverName
I0216 16:50:17.208376 20285 api_server.go:166] Checking apiserver status ...
I0216 16:50:17.208446 20285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0216 16:50:17.208497 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHHostname
I0216 16:50:17.211927 20285 main.go:141] libmachine: (functional-252045) DBG | domain functional-252045 has defined MAC address 52:54:00:66:ee:c8 in network mk-functional-252045
I0216 16:50:17.212268 20285 main.go:141] libmachine: (functional-252045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:ee:c8", ip: ""} in network mk-functional-252045: {Iface:virbr1 ExpiryTime:2024-02-16 17:47:52 +0000 UTC Type:0 Mac:52:54:00:66:ee:c8 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-252045 Clientid:01:52:54:00:66:ee:c8}
I0216 16:50:17.212299 20285 main.go:141] libmachine: (functional-252045) DBG | domain functional-252045 has defined IP address 192.168.39.243 and MAC address 52:54:00:66:ee:c8 in network mk-functional-252045
I0216 16:50:17.212592 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHPort
I0216 16:50:17.212809 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHKeyPath
I0216 16:50:17.212950 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHUsername
I0216 16:50:17.213106 20285 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17936-7030/.minikube/machines/functional-252045/id_rsa Username:docker}
I0216 16:50:17.342117 20285 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7757/cgroup
I0216 16:50:17.374801 20285 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/podc2b51b1fd880b34fa26ce379ca35c0cc/45c1123207aa7dbd424fa9da42a3ef571732967e1253ad1d41ad30ceddb1ddae"
I0216 16:50:17.374871 20285 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc2b51b1fd880b34fa26ce379ca35c0cc/45c1123207aa7dbd424fa9da42a3ef571732967e1253ad1d41ad30ceddb1ddae/freezer.state
I0216 16:50:17.400538 20285 api_server.go:204] freezer state: "THAWED"
I0216 16:50:17.400564 20285 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8441/healthz ...
I0216 16:50:17.406114 20285 api_server.go:279] https://192.168.39.243:8441/healthz returned 200:
ok
W0216 16:50:17.406155 20285 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0216 16:50:17.406387 20285 config.go:182] Loaded profile config "functional-252045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:17.406412 20285 addons.go:69] Setting dashboard=true in profile "functional-252045"
I0216 16:50:17.406421 20285 addons.go:234] Setting addon dashboard=true in "functional-252045"
I0216 16:50:17.406456 20285 host.go:66] Checking if "functional-252045" exists ...
I0216 16:50:17.406714 20285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:17.406753 20285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:17.422395 20285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
I0216 16:50:17.422837 20285 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:17.423396 20285 main.go:141] libmachine: Using API Version 1
I0216 16:50:17.423423 20285 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:17.423833 20285 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:17.424445 20285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:17.424513 20285 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:17.440658 20285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
I0216 16:50:17.441144 20285 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:17.441625 20285 main.go:141] libmachine: Using API Version 1
I0216 16:50:17.441641 20285 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:17.441970 20285 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:17.442154 20285 main.go:141] libmachine: (functional-252045) Calling .GetState
I0216 16:50:17.443612 20285 main.go:141] libmachine: (functional-252045) Calling .DriverName
I0216 16:50:17.447172 20285 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0216 16:50:17.448821 20285 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0216 16:50:17.450304 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0216 16:50:17.450327 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0216 16:50:17.450357 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHHostname
I0216 16:50:17.453762 20285 main.go:141] libmachine: (functional-252045) DBG | domain functional-252045 has defined MAC address 52:54:00:66:ee:c8 in network mk-functional-252045
I0216 16:50:17.454173 20285 main.go:141] libmachine: (functional-252045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:ee:c8", ip: ""} in network mk-functional-252045: {Iface:virbr1 ExpiryTime:2024-02-16 17:47:52 +0000 UTC Type:0 Mac:52:54:00:66:ee:c8 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-252045 Clientid:01:52:54:00:66:ee:c8}
I0216 16:50:17.454215 20285 main.go:141] libmachine: (functional-252045) DBG | domain functional-252045 has defined IP address 192.168.39.243 and MAC address 52:54:00:66:ee:c8 in network mk-functional-252045
I0216 16:50:17.454387 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHPort
I0216 16:50:17.454571 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHKeyPath
I0216 16:50:17.454739 20285 main.go:141] libmachine: (functional-252045) Calling .GetSSHUsername
I0216 16:50:17.454868 20285 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17936-7030/.minikube/machines/functional-252045/id_rsa Username:docker}
I0216 16:50:17.617361 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0216 16:50:17.617389 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0216 16:50:17.700839 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0216 16:50:17.700872 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0216 16:50:17.767914 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0216 16:50:17.767949 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0216 16:50:17.865422 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0216 16:50:17.865445 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0216 16:50:17.926828 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0216 16:50:17.926849 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0216 16:50:17.961408 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0216 16:50:17.961432 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0216 16:50:17.982146 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0216 16:50:17.982170 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0216 16:50:18.000344 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0216 16:50:18.000366 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0216 16:50:18.022525 20285 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0216 16:50:18.022555 20285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0216 16:50:18.039866 20285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0216 16:50:19.853912 20285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.813997388s)
I0216 16:50:19.853999 20285 main.go:141] libmachine: Making call to close driver server
I0216 16:50:19.854017 20285 main.go:141] libmachine: (functional-252045) Calling .Close
I0216 16:50:19.854331 20285 main.go:141] libmachine: Successfully made call to close driver server
I0216 16:50:19.854359 20285 main.go:141] libmachine: Making call to close connection to plugin binary
I0216 16:50:19.854360 20285 main.go:141] libmachine: (functional-252045) DBG | Closing plugin on server side
I0216 16:50:19.854369 20285 main.go:141] libmachine: Making call to close driver server
I0216 16:50:19.854381 20285 main.go:141] libmachine: (functional-252045) Calling .Close
I0216 16:50:19.854642 20285 main.go:141] libmachine: Successfully made call to close driver server
I0216 16:50:19.854657 20285 main.go:141] libmachine: Making call to close connection to plugin binary
I0216 16:50:19.856397 20285 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-252045 addons enable metrics-server
I0216 16:50:19.857649 20285 addons.go:197] Writing out "functional-252045" config to set dashboard=true...
W0216 16:50:19.857891 20285 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0216 16:50:19.858549 20285 kapi.go:59] client config for functional-252045: &rest.Config{Host:"https://192.168.39.243:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-7030/.minikube/profiles/functional-252045/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-7030/.minikube/profiles/functional-252045/client.key", CAFile:"/home/jenkins/minikube-integration/17936-7030/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c29b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0216 16:50:19.882047 20285 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard feda0a11-207e-4652-a44a-ab26a2ec6373 728 0 2024-02-16 16:50:19 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-02-16 16:50:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.188.212,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.188.212],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0216 16:50:19.882187 20285 out.go:239] * Launching proxy ...
* Launching proxy ...
I0216 16:50:19.882249 20285 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-252045 proxy --port 36195]
I0216 16:50:19.882530 20285 dashboard.go:157] Waiting for kubectl to output host:port ...
I0216 16:50:19.927010 20285 out.go:177]
W0216 16:50:19.928742 20285 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0216 16:50:19.928763 20285 out.go:239] *
*
W0216 16:50:19.930627 20285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0216 16:50:19.932194 20285 out.go:177]
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-252045 -n functional-252045
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-252045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-252045 logs -n 25: (1.971249127s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| cache | delete | minikube | jenkins | v1.32.0 | 16 Feb 24 16:49 UTC | 16 Feb 24 16:49 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-252045 kubectl -- | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:49 UTC | 16 Feb 24 16:49 UTC |
| | --context functional-252045 | | | | | |
| | get pods | | | | | |
| start | -p functional-252045 | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:49 UTC | 16 Feb 24 16:50 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
| service | invalid-svc -p | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | functional-252045 | | | | | |
| start | -p functional-252045 | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| cp | functional-252045 cp | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-252045 config unset | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | cpus | | | | | |
| config | functional-252045 config get | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | cpus | | | | | |
| config | functional-252045 config set | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | cpus 2 | | | | | |
| config | functional-252045 config get | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | cpus | | | | | |
| ssh | functional-252045 ssh -n | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | functional-252045 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-252045 config unset | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | cpus | | | | | |
| config | functional-252045 config get | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | cpus | | | | | |
| start | -p functional-252045 | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| cp | functional-252045 cp | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | functional-252045:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd1822476358/001/cp-test.txt | | | | | |
| start | -p functional-252045 --dry-run | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | functional-252045 ssh -n | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | functional-252045 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-252045 cp | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| dashboard | --url --port 36195 | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | -p functional-252045 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-252045 ssh -n | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | functional-252045 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| mount | -p functional-252045 | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port2762247869/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-252045 ssh findmnt | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-252045 ssh findmnt | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-252045 ssh -- ls | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | -la /mount-9p | | | | | |
| ssh | functional-252045 ssh cat | functional-252045 | jenkins | v1.32.0 | 16 Feb 24 16:50 UTC | 16 Feb 24 16:50 UTC |
| | /mount-9p/test-1708102217483552997 | | | | | |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/02/16 16:50:16
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.21.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0216 16:50:16.633887 20116 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:16.634160 20116 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:16.634169 20116 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:16.634458 20116 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:16.634871 20116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-7030/.minikube/bin
I0216 16:50:16.635707 20116 out.go:298] Setting JSON to false
I0216 16:50:16.637162 20116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1967,"bootTime":1708100250,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0216 16:50:16.637237 20116 start.go:139] virtualization: kvm guest
I0216 16:50:16.639163 20116 out.go:177] * [functional-252045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0216 16:50:16.640870 20116 out.go:177] - MINIKUBE_LOCATION=17936
I0216 16:50:16.640877 20116 notify.go:220] Checking for updates...
I0216 16:50:16.642189 20116 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0216 16:50:16.643711 20116 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17936-7030/kubeconfig
I0216 16:50:16.645083 20116 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-7030/.minikube
I0216 16:50:16.646455 20116 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0216 16:50:16.648057 20116 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0216 16:50:16.649952 20116 config.go:182] Loaded profile config "functional-252045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:50:16.650366 20116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:16.650421 20116 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:16.665590 20116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
I0216 16:50:16.665951 20116 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:16.666472 20116 main.go:141] libmachine: Using API Version 1
I0216 16:50:16.666493 20116 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:16.666828 20116 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:16.666983 20116 main.go:141] libmachine: (functional-252045) Calling .DriverName
I0216 16:50:16.667192 20116 driver.go:392] Setting default libvirt URI to qemu:///system
I0216 16:50:16.667479 20116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0216 16:50:16.667513 20116 main.go:141] libmachine: Launching plugin server for driver kvm2
I0216 16:50:16.682119 20116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
I0216 16:50:16.682534 20116 main.go:141] libmachine: () Calling .GetVersion
I0216 16:50:16.683035 20116 main.go:141] libmachine: Using API Version 1
I0216 16:50:16.683062 20116 main.go:141] libmachine: () Calling .SetConfigRaw
I0216 16:50:16.683428 20116 main.go:141] libmachine: () Calling .GetMachineName
I0216 16:50:16.683623 20116 main.go:141] libmachine: (functional-252045) Calling .DriverName
I0216 16:50:16.724287 20116 out.go:177] * Using the kvm2 driver based on existing profile
I0216 16:50:16.725793 20116 start.go:299] selected driver: kvm2
I0216 16:50:16.725817 20116 start.go:903] validating driver "kvm2" against &{Name:functional-252045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17936/minikube-v1.32.1-1708020063-17936-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-252045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 16:50:16.725975 20116 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0216 16:50:16.727361 20116 cni.go:84] Creating CNI manager for ""
I0216 16:50:16.727393 20116 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0216 16:50:16.727411 20116 start_flags.go:323] config:
{Name:functional-252045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17936/minikube-v1.32.1-1708020063-17936-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-252045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 16:50:16.729294 20116 out.go:177] * dry-run validation complete!
==> Docker <==
-- Journal begins at Fri 2024-02-16 16:47:48 UTC, ends at Fri 2024-02-16 16:50:21 UTC. --
Feb 16 16:50:12 functional-252045 cri-dockerd[7074]: time="2024-02-16T16:50:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f2e5bcd5ff4649089dab0c64bf20a9fc14e05269a61dcacd199e361a84d6b6b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Feb 16 16:50:13 functional-252045 dockerd[6834]: time="2024-02-16T16:50:13.350177409Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Feb 16 16:50:13 functional-252045 dockerd[6834]: time="2024-02-16T16:50:13.350728010Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
Feb 16 16:50:15 functional-252045 dockerd[6834]: time="2024-02-16T16:50:15.008399708Z" level=info msg="ignoring event" container=0f2e5bcd5ff4649089dab0c64bf20a9fc14e05269a61dcacd199e361a84d6b6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 16 16:50:15 functional-252045 dockerd[6840]: time="2024-02-16T16:50:15.009124040Z" level=info msg="shim disconnected" id=0f2e5bcd5ff4649089dab0c64bf20a9fc14e05269a61dcacd199e361a84d6b6b namespace=moby
Feb 16 16:50:15 functional-252045 dockerd[6840]: time="2024-02-16T16:50:15.009175371Z" level=warning msg="cleaning up after shim disconnected" id=0f2e5bcd5ff4649089dab0c64bf20a9fc14e05269a61dcacd199e361a84d6b6b namespace=moby
Feb 16 16:50:15 functional-252045 dockerd[6840]: time="2024-02-16T16:50:15.009183860Z" level=info msg="cleaning up dead shim" namespace=moby
Feb 16 16:50:17 functional-252045 dockerd[6840]: time="2024-02-16T16:50:17.408269029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 16 16:50:17 functional-252045 dockerd[6840]: time="2024-02-16T16:50:17.408788193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:17 functional-252045 dockerd[6840]: time="2024-02-16T16:50:17.408868838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 16 16:50:17 functional-252045 dockerd[6840]: time="2024-02-16T16:50:17.408965021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:18 functional-252045 cri-dockerd[7074]: time="2024-02-16T16:50:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2135eb44f3adf9b306c40b5bd04eb4d61ddf296a74fa77e53bb7d6d16e9be80/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Feb 16 16:50:19 functional-252045 dockerd[6840]: time="2024-02-16T16:50:19.927444073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 16 16:50:19 functional-252045 dockerd[6840]: time="2024-02-16T16:50:19.927540562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:19 functional-252045 dockerd[6840]: time="2024-02-16T16:50:19.927573663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 16 16:50:19 functional-252045 dockerd[6840]: time="2024-02-16T16:50:19.927640823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.495380755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.496784975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.496811829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.496868063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.517805916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.518225205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.518376632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 16 16:50:20 functional-252045 dockerd[6840]: time="2024-02-16T16:50:20.518933819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 16 16:50:20 functional-252045 cri-dockerd[7074]: time="2024-02-16T16:50:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b71d5fa00e1133e4ec46cba6af194309f0afa9ec5a0cc095a8edb70be453a28/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0e3511fe4dd8f 6e38f40d628db 10 seconds ago Running storage-provisioner 3 8abe6dda762ca storage-provisioner
f239d9641407c ead0a4a53df89 27 seconds ago Running coredns 2 5e6bc0f6648b5 coredns-5dd5756b68-klqcl
a7450fd655d3c 6e38f40d628db 27 seconds ago Exited storage-provisioner 2 8abe6dda762ca storage-provisioner
ad1c9395696e8 83f6cc407eed8 28 seconds ago Running kube-proxy 2 5451e2aa6b185 kube-proxy-bfbpf
f88136ecdec0c e3db313c6dbc0 32 seconds ago Running kube-scheduler 2 caafbf15d8fcb kube-scheduler-functional-252045
0a931c9b0efd5 73deb9a3f7025 32 seconds ago Running etcd 2 94c5e2b1b64ab etcd-functional-252045
a39e421a9bee5 d058aa5ab969c 33 seconds ago Running kube-controller-manager 2 31bc0728e866e kube-controller-manager-functional-252045
45c1123207aa7 7fe0e6f37db33 33 seconds ago Running kube-apiserver 0 3223254a2625f kube-apiserver-functional-252045
9bfbe201ac3bd ead0a4a53df89 About a minute ago Exited coredns 1 c0dec6771b2f2 coredns-5dd5756b68-klqcl
297dfce7d8a9a d058aa5ab969c About a minute ago Exited kube-controller-manager 1 5dfe5d620a31e kube-controller-manager-functional-252045
d987ccd91104e 73deb9a3f7025 About a minute ago Exited etcd 1 e5d0d991ee8d1 etcd-functional-252045
7866bec9824be 83f6cc407eed8 About a minute ago Exited kube-proxy 1 481ad3974f1dd kube-proxy-bfbpf
c51a4f9114daf e3db313c6dbc0 About a minute ago Exited kube-scheduler 1 8223e2b57274c kube-scheduler-functional-252045
62c09e2f3cf75 7fe0e6f37db33 About a minute ago Exited kube-apiserver 1 af7416473f652 kube-apiserver-functional-252045
==> coredns [9bfbe201ac3b] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:45269 - 37825 "HINFO IN 1838643386725229726.1098123538695082489. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021231672s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [f239d9641407] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:39014 - 1431 "HINFO IN 6120982926786768696.8405161135744713568. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044532626s
==> describe nodes <==
Name: functional-252045
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-252045
kubernetes.io/os=linux
minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9
minikube.k8s.io/name=functional-252045
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_02_16T16_48_25_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 16 Feb 2024 16:48:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-252045
AcquireTime: <unset>
RenewTime: Fri, 16 Feb 2024 16:50:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 16 Feb 2024 16:49:52 +0000 Fri, 16 Feb 2024 16:48:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 16 Feb 2024 16:49:52 +0000 Fri, 16 Feb 2024 16:48:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 16 Feb 2024 16:49:52 +0000 Fri, 16 Feb 2024 16:48:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 16 Feb 2024 16:49:52 +0000 Fri, 16 Feb 2024 16:48:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.243
Hostname: functional-252045
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914496Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914496Ki
pods: 110
System Info:
Machine ID: 7fbd3523608c484cb8c9242b6b6c33b5
System UUID: 7fbd3523-608c-484c-b8c9-242b6b6c33b5
Boot ID: bd6637c6-dc61-4517-84a7-609d8738a75b
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.28.4
Kube-Proxy Version: v1.28.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-mount 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2s
default hello-node-d7447cc7f-zx5gj 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 5s
kube-system coredns-5dd5756b68-klqcl 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 104s
kube-system etcd-functional-252045 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 115s
kube-system kube-apiserver-functional-252045 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 27s
kube-system kube-controller-manager-functional-252045 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 115s
kube-system kube-proxy-bfbpf 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 103s
kube-system kube-scheduler-functional-252045 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 115s
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 102s
kubernetes-dashboard dashboard-metrics-scraper-7fd5cb4ddc-42pt5 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2s
kubernetes-dashboard kubernetes-dashboard-8694d4445c-wtrmv 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (4%!)(MISSING) 170Mi (4%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 102s kube-proxy
Normal Starting 27s kube-proxy
Normal Starting 72s kube-proxy
Normal NodeHasSufficientMemory 2m4s (x8 over 2m4s) kubelet Node functional-252045 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m4s (x8 over 2m4s) kubelet Node functional-252045 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m4s (x7 over 2m4s) kubelet Node functional-252045 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m4s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 116s kubelet Node functional-252045 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 116s kubelet Node functional-252045 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 116s kubelet Node functional-252045 status is now: NodeHasNoDiskPressure
Normal Starting 116s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 115s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 111s kubelet Node functional-252045 status is now: NodeReady
Normal RegisteredNode 104s node-controller Node functional-252045 event: Registered Node functional-252045 in Controller
Normal RegisteredNode 60s node-controller Node functional-252045 event: Registered Node functional-252045 in Controller
Normal Starting 35s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 34s (x8 over 34s) kubelet Node functional-252045 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 34s (x8 over 34s) kubelet Node functional-252045 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 34s (x7 over 34s) kubelet Node functional-252045 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 34s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 16s node-controller Node functional-252045 event: Registered Node functional-252045 in Controller
==> dmesg <==
[ +2.311389] kauditd_printk_skb: 53 callbacks suppressed
[ +4.395937] systemd-fstab-generator[1497]: Ignoring "noauto" for root device
[ +0.603814] kauditd_printk_skb: 38 callbacks suppressed
[ +7.686249] systemd-fstab-generator[2438]: Ignoring "noauto" for root device
[ +19.753682] systemd-fstab-generator[3873]: Ignoring "noauto" for root device
[ +0.341875] systemd-fstab-generator[3907]: Ignoring "noauto" for root device
[ +0.158386] systemd-fstab-generator[3918]: Ignoring "noauto" for root device
[ +0.171651] systemd-fstab-generator[3941]: Ignoring "noauto" for root device
[ +5.173539] kauditd_printk_skb: 23 callbacks suppressed
[ +6.832890] systemd-fstab-generator[4535]: Ignoring "noauto" for root device
[ +0.129641] systemd-fstab-generator[4546]: Ignoring "noauto" for root device
[ +0.121367] systemd-fstab-generator[4557]: Ignoring "noauto" for root device
[ +0.149557] systemd-fstab-generator[4572]: Ignoring "noauto" for root device
[Feb16 16:49] kauditd_printk_skb: 29 callbacks suppressed
[ +26.348872] systemd-fstab-generator[6346]: Ignoring "noauto" for root device
[ +0.313099] systemd-fstab-generator[6380]: Ignoring "noauto" for root device
[ +0.165294] systemd-fstab-generator[6391]: Ignoring "noauto" for root device
[ +0.181825] systemd-fstab-generator[6404]: Ignoring "noauto" for root device
[ +11.904156] systemd-fstab-generator[7023]: Ignoring "noauto" for root device
[ +0.121615] systemd-fstab-generator[7034]: Ignoring "noauto" for root device
[ +0.118196] systemd-fstab-generator[7045]: Ignoring "noauto" for root device
[ +0.148343] systemd-fstab-generator[7060]: Ignoring "noauto" for root device
[ +2.307423] systemd-fstab-generator[7325]: Ignoring "noauto" for root device
[ +7.992240] kauditd_printk_skb: 29 callbacks suppressed
[Feb16 16:50] kauditd_printk_skb: 11 callbacks suppressed
==> etcd [0a931c9b0efd] <==
{"level":"info","ts":"2024-02-16T16:49:49.382924Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-02-16T16:49:49.383119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 switched to configuration voters=(5579817544954101747)"}
{"level":"info","ts":"2024-02-16T16:49:49.383202Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","added-peer-id":"4d6f7e7e767b3ff3","added-peer-peer-urls":["https://192.168.39.243:2380"]}
{"level":"info","ts":"2024-02-16T16:49:49.383298Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","cluster-version":"3.5"}
{"level":"info","ts":"2024-02-16T16:49:49.383322Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-02-16T16:49:49.388881Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-02-16T16:49:49.38905Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-02-16T16:49:49.389109Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-02-16T16:49:49.389878Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4d6f7e7e767b3ff3","initial-advertise-peer-urls":["https://192.168.39.243:2380"],"listen-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-02-16T16:49:49.389957Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-02-16T16:49:50.365961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
{"level":"info","ts":"2024-02-16T16:49:50.366022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
{"level":"info","ts":"2024-02-16T16:49:50.36604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-02-16T16:49:50.366051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 4"}
{"level":"info","ts":"2024-02-16T16:49:50.366057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 4"}
{"level":"info","ts":"2024-02-16T16:49:50.366107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 4"}
{"level":"info","ts":"2024-02-16T16:49:50.366118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 4"}
{"level":"info","ts":"2024-02-16T16:49:50.372913Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-252045 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
{"level":"info","ts":"2024-02-16T16:49:50.373099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-02-16T16:49:50.374333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-02-16T16:49:50.374659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-02-16T16:49:50.375792Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
{"level":"info","ts":"2024-02-16T16:49:50.376826Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-02-16T16:49:50.376866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-02-16T16:50:19.634241Z","caller":"traceutil/trace.go:171","msg":"trace[1446042097] transaction","detail":"{read_only:false; response_revision:726; number_of_response:1; }","duration":"108.714792ms","start":"2024-02-16T16:50:19.525511Z","end":"2024-02-16T16:50:19.634226Z","steps":["trace[1446042097] 'process raft request' (duration: 99.572394ms)"],"step_count":1}
==> etcd [d987ccd91104] <==
{"level":"info","ts":"2024-02-16T16:49:05.802083Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-02-16T16:49:07.67256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 2"}
{"level":"info","ts":"2024-02-16T16:49:07.672684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 2"}
{"level":"info","ts":"2024-02-16T16:49:07.672702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 2"}
{"level":"info","ts":"2024-02-16T16:49:07.672714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 3"}
{"level":"info","ts":"2024-02-16T16:49:07.672719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-02-16T16:49:07.672727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 3"}
{"level":"info","ts":"2024-02-16T16:49:07.672736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 3"}
{"level":"info","ts":"2024-02-16T16:49:07.677273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-02-16T16:49:07.678331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
{"level":"info","ts":"2024-02-16T16:49:07.678732Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-02-16T16:49:07.679479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-02-16T16:49:07.677222Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-252045 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
{"level":"info","ts":"2024-02-16T16:49:07.69481Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-02-16T16:49:07.694848Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-02-16T16:49:32.305917Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-02-16T16:49:32.305996Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-252045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
{"level":"warn","ts":"2024-02-16T16:49:32.306088Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
{"level":"warn","ts":"2024-02-16T16:49:32.306114Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
{"level":"warn","ts":"2024-02-16T16:49:32.306168Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-02-16T16:49:32.306299Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"info","ts":"2024-02-16T16:49:32.339281Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4d6f7e7e767b3ff3","current-leader-member-id":"4d6f7e7e767b3ff3"}
{"level":"info","ts":"2024-02-16T16:49:32.343797Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-02-16T16:49:32.34406Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.243:2380"}
{"level":"info","ts":"2024-02-16T16:49:32.344185Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-252045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
==> kernel <==
16:50:21 up 2 min, 0 users, load average: 3.18, 1.47, 0.56
Linux functional-252045 5.10.57 #1 SMP Thu Feb 15 22:26:06 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
==> kube-apiserver [45c1123207aa] <==
I0216 16:49:52.143815 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0216 16:49:52.143866 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0216 16:49:52.143873 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0216 16:49:52.145914 1 shared_informer.go:318] Caches are synced for crd-autoregister
I0216 16:49:52.145937 1 aggregator.go:166] initial CRD sync complete...
I0216 16:49:52.145942 1 autoregister_controller.go:141] Starting autoregister controller
I0216 16:49:52.145947 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0216 16:49:52.145952 1 cache.go:39] Caches are synced for autoregister controller
I0216 16:49:52.149190 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0216 16:49:52.149635 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0216 16:49:52.941301 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0216 16:49:53.355830 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.243]
I0216 16:49:53.357146 1 controller.go:624] quota admission added evaluator for: endpoints
I0216 16:49:53.362825 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0216 16:49:53.882006 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0216 16:49:53.921850 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0216 16:49:54.029081 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0216 16:49:54.144912 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0216 16:49:54.168127 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0216 16:50:11.502687 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.110.227"}
I0216 16:50:16.013498 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0216 16:50:16.169037 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.216.225"}
I0216 16:50:19.058796 1 controller.go:624] quota admission added evaluator for: namespaces
I0216 16:50:19.643387 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.188.212"}
I0216 16:50:19.842240 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.166.97"}
==> kube-apiserver [62c09e2f3cf7] <==
W0216 16:49:41.624506 1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.643443 1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.661282 1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.696470 1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.733085 1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.757450 1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.806490 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.815959 1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.822801 1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.824317 1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.843676 1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.853932 1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.893918 1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:41.972529 1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.028382 1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.049891 1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.052333 1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.052748 1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.060879 1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.068874 1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.070212 1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.092092 1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.148895 1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.171810 1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0216 16:49:42.199995 1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-controller-manager [297dfce7d8a9] <==
I0216 16:49:21.636759 1 shared_informer.go:318] Caches are synced for expand
I0216 16:49:21.639159 1 shared_informer.go:318] Caches are synced for node
I0216 16:49:21.639355 1 range_allocator.go:174] "Sending events to api server"
I0216 16:49:21.639522 1 range_allocator.go:178] "Starting range CIDR allocator"
I0216 16:49:21.639631 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
I0216 16:49:21.639684 1 shared_informer.go:318] Caches are synced for cidrallocator
I0216 16:49:21.642790 1 shared_informer.go:318] Caches are synced for ephemeral
I0216 16:49:21.644258 1 shared_informer.go:318] Caches are synced for namespace
I0216 16:49:21.644676 1 shared_informer.go:318] Caches are synced for endpoint
I0216 16:49:21.651493 1 shared_informer.go:318] Caches are synced for service account
I0216 16:49:21.654059 1 shared_informer.go:318] Caches are synced for HPA
I0216 16:49:21.656428 1 shared_informer.go:318] Caches are synced for crt configmap
I0216 16:49:21.731880 1 shared_informer.go:318] Caches are synced for cronjob
I0216 16:49:21.739303 1 shared_informer.go:318] Caches are synced for resource quota
I0216 16:49:21.763675 1 shared_informer.go:318] Caches are synced for certificate-csrapproving
I0216 16:49:21.770078 1 shared_informer.go:318] Caches are synced for resource quota
I0216 16:49:21.770329 1 shared_informer.go:318] Caches are synced for TTL after finished
I0216 16:49:21.773910 1 shared_informer.go:318] Caches are synced for job
I0216 16:49:21.781921 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
I0216 16:49:21.782207 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
I0216 16:49:21.783491 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0216 16:49:21.786500 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
I0216 16:49:22.176046 1 shared_informer.go:318] Caches are synced for garbage collector
I0216 16:49:22.193554 1 shared_informer.go:318] Caches are synced for garbage collector
I0216 16:49:22.193640 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
==> kube-controller-manager [a39e421a9bee] <==
I0216 16:50:19.340960 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.957268ms"
E0216 16:50:19.340982 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0216 16:50:19.341815 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="18.270532ms"
E0216 16:50:19.341827 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0216 16:50:19.341857 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0216 16:50:19.341867 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0216 16:50:19.352090 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.007964ms"
E0216 16:50:19.354277 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0216 16:50:19.356138 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0216 16:50:19.365804 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.387005ms"
E0216 16:50:19.365851 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0216 16:50:19.365891 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0216 16:50:19.377218 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="11.374008ms"
E0216 16:50:19.377303 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0216 16:50:19.377346 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0216 16:50:19.397513 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-wtrmv"
I0216 16:50:19.439508 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.159145ms"
I0216 16:50:19.464780 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.221266ms"
I0216 16:50:19.506313 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.483252ms"
I0216 16:50:19.506631 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="107.022µs"
I0216 16:50:19.637413 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-42pt5"
I0216 16:50:19.654384 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="116.209293ms"
I0216 16:50:19.717293 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="62.862142ms"
I0216 16:50:19.775937 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="58.56977ms"
I0216 16:50:19.776065 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="60.94µs"
==> kube-proxy [7866bec9824b] <==
I0216 16:49:07.220878 1 server_others.go:69] "Using iptables proxy"
I0216 16:49:09.275011 1 node.go:141] Successfully retrieved node IP: 192.168.39.243
I0216 16:49:09.378847 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0216 16:49:09.378869 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0216 16:49:09.384130 1 server_others.go:152] "Using iptables Proxier"
I0216 16:49:09.384491 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0216 16:49:09.385662 1 server.go:846] "Version info" version="v1.28.4"
I0216 16:49:09.385743 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 16:49:09.393507 1 config.go:188] "Starting service config controller"
I0216 16:49:09.393886 1 shared_informer.go:311] Waiting for caches to sync for service config
I0216 16:49:09.394250 1 config.go:97] "Starting endpoint slice config controller"
I0216 16:49:09.394403 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0216 16:49:09.403529 1 config.go:315] "Starting node config controller"
I0216 16:49:09.404039 1 shared_informer.go:311] Waiting for caches to sync for node config
I0216 16:49:09.495106 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0216 16:49:09.495227 1 shared_informer.go:318] Caches are synced for service config
I0216 16:49:09.505338 1 shared_informer.go:318] Caches are synced for node config
==> kube-proxy [ad1c9395696e] <==
I0216 16:49:54.406219 1 server_others.go:69] "Using iptables proxy"
I0216 16:49:54.462123 1 node.go:141] Successfully retrieved node IP: 192.168.39.243
I0216 16:49:54.530988 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0216 16:49:54.531017 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0216 16:49:54.535370 1 server_others.go:152] "Using iptables Proxier"
I0216 16:49:54.535425 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0216 16:49:54.535673 1 server.go:846] "Version info" version="v1.28.4"
I0216 16:49:54.535684 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 16:49:54.537251 1 config.go:188] "Starting service config controller"
I0216 16:49:54.537269 1 shared_informer.go:311] Waiting for caches to sync for service config
I0216 16:49:54.537288 1 config.go:97] "Starting endpoint slice config controller"
I0216 16:49:54.537291 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0216 16:49:54.537741 1 config.go:315] "Starting node config controller"
I0216 16:49:54.537747 1 shared_informer.go:311] Waiting for caches to sync for node config
I0216 16:49:54.638377 1 shared_informer.go:318] Caches are synced for node config
I0216 16:49:54.638403 1 shared_informer.go:318] Caches are synced for service config
I0216 16:49:54.638440 1 shared_informer.go:318] Caches are synced for endpoint slice config
==> kube-scheduler [c51a4f9114da] <==
I0216 16:49:06.217797 1 serving.go:348] Generated self-signed cert in-memory
W0216 16:49:09.206176 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0216 16:49:09.206223 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0216 16:49:09.206234 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0216 16:49:09.206240 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0216 16:49:09.228459 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I0216 16:49:09.228678 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 16:49:09.232679 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0216 16:49:09.235423 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 16:49:09.236033 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0216 16:49:09.236203 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0216 16:49:09.335834 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 16:49:32.220409 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0216 16:49:32.220532 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
E0216 16:49:32.220782 1 run.go:74] "command failed" err="finished without leader elect"
==> kube-scheduler [f88136ecdec0] <==
I0216 16:49:49.983882 1 serving.go:348] Generated self-signed cert in-memory
I0216 16:49:52.103642 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
I0216 16:49:52.103662 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 16:49:52.108235 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0216 16:49:52.108258 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0216 16:49:52.108303 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0216 16:49:52.108320 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 16:49:52.108346 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0216 16:49:52.108350 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0216 16:49:52.108934 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0216 16:49:52.108991 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0216 16:49:52.208931 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0216 16:49:52.209307 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0216 16:49:52.209914 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
==> kubelet <==
-- Journal begins at Fri 2024-02-16 16:47:48 UTC, ends at Fri 2024-02-16 16:50:22 UTC. --
Feb 16 16:50:13 functional-252045 kubelet[7331]: E0216 16:50:13.356206 7331 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied" image="nonexistingimage:latest"
Feb 16 16:50:13 functional-252045 kubelet[7331]: E0216 16:50:13.356323 7331 kuberuntime_manager.go:1261] container &Container{Name:nginx,Image:nonexistingimage:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnbpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod invalid-svc_default(c3f7be9d-42b8-4308-9671-0fc9f48a460f):
ErrImagePull: Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Feb 16 16:50:13 functional-252045 kubelet[7331]: E0216 16:50:13.356407 7331 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied\"" pod="default/invalid-svc" podUID="c3f7be9d-42b8-4308-9671-0fc9f48a460f"
Feb 16 16:50:13 functional-252045 kubelet[7331]: E0216 16:50:13.836863 7331 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\"\"" pod="default/invalid-svc" podUID="c3f7be9d-42b8-4308-9671-0fc9f48a460f"
Feb 16 16:50:15 functional-252045 kubelet[7331]: I0216 16:50:15.242506 7331 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnbpm\" (UniqueName: \"kubernetes.io/projected/c3f7be9d-42b8-4308-9671-0fc9f48a460f-kube-api-access-rnbpm\") pod \"c3f7be9d-42b8-4308-9671-0fc9f48a460f\" (UID: \"c3f7be9d-42b8-4308-9671-0fc9f48a460f\") "
Feb 16 16:50:15 functional-252045 kubelet[7331]: I0216 16:50:15.248223 7331 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f7be9d-42b8-4308-9671-0fc9f48a460f-kube-api-access-rnbpm" (OuterVolumeSpecName: "kube-api-access-rnbpm") pod "c3f7be9d-42b8-4308-9671-0fc9f48a460f" (UID: "c3f7be9d-42b8-4308-9671-0fc9f48a460f"). InnerVolumeSpecName "kube-api-access-rnbpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:50:15 functional-252045 kubelet[7331]: I0216 16:50:15.343158 7331 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rnbpm\" (UniqueName: \"kubernetes.io/projected/c3f7be9d-42b8-4308-9671-0fc9f48a460f-kube-api-access-rnbpm\") on node \"functional-252045\" DevicePath \"\""
Feb 16 16:50:16 functional-252045 kubelet[7331]: I0216 16:50:16.081571 7331 topology_manager.go:215] "Topology Admit Handler" podUID="c7035912-8bcd-4d9f-bd93-2aa0b3d17679" podNamespace="default" podName="hello-node-d7447cc7f-zx5gj"
Feb 16 16:50:16 functional-252045 kubelet[7331]: W0216 16:50:16.088769 7331 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-252045" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'functional-252045' and this object
Feb 16 16:50:16 functional-252045 kubelet[7331]: E0216 16:50:16.088813 7331 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-252045" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'functional-252045' and this object
Feb 16 16:50:16 functional-252045 kubelet[7331]: I0216 16:50:16.252409 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cvx7\" (UniqueName: \"kubernetes.io/projected/c7035912-8bcd-4d9f-bd93-2aa0b3d17679-kube-api-access-5cvx7\") pod \"hello-node-d7447cc7f-zx5gj\" (UID: \"c7035912-8bcd-4d9f-bd93-2aa0b3d17679\") " pod="default/hello-node-d7447cc7f-zx5gj"
Feb 16 16:50:16 functional-252045 kubelet[7331]: I0216 16:50:16.917083 7331 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c3f7be9d-42b8-4308-9671-0fc9f48a460f" path="/var/lib/kubelet/pods/c3f7be9d-42b8-4308-9671-0fc9f48a460f/volumes"
Feb 16 16:50:18 functional-252045 kubelet[7331]: I0216 16:50:18.276146 7331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2135eb44f3adf9b306c40b5bd04eb4d61ddf296a74fa77e53bb7d6d16e9be80"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.173096 7331 topology_manager.go:215] "Topology Admit Handler" podUID="3c7cc9d1-060b-4c76-b5c7-8013cca176a5" podNamespace="default" podName="busybox-mount"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.278051 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3c7cc9d1-060b-4c76-b5c7-8013cca176a5-test-volume\") pod \"busybox-mount\" (UID: \"3c7cc9d1-060b-4c76-b5c7-8013cca176a5\") " pod="default/busybox-mount"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.278103 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8cj\" (UniqueName: \"kubernetes.io/projected/3c7cc9d1-060b-4c76-b5c7-8013cca176a5-kube-api-access-gm8cj\") pod \"busybox-mount\" (UID: \"3c7cc9d1-060b-4c76-b5c7-8013cca176a5\") " pod="default/busybox-mount"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.415426 7331 topology_manager.go:215] "Topology Admit Handler" podUID="edcf8f73-4124-4f73-8a57-2aff9c1d2efd" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-wtrmv"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.583420 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvht8\" (UniqueName: \"kubernetes.io/projected/edcf8f73-4124-4f73-8a57-2aff9c1d2efd-kube-api-access-dvht8\") pod \"kubernetes-dashboard-8694d4445c-wtrmv\" (UID: \"edcf8f73-4124-4f73-8a57-2aff9c1d2efd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wtrmv"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.583533 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/edcf8f73-4124-4f73-8a57-2aff9c1d2efd-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-wtrmv\" (UID: \"edcf8f73-4124-4f73-8a57-2aff9c1d2efd\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wtrmv"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.659441 7331 topology_manager.go:215] "Topology Admit Handler" podUID="44655129-a25e-45af-8ae1-3a7799523203" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-42pt5"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.785813 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4m8r\" (UniqueName: \"kubernetes.io/projected/44655129-a25e-45af-8ae1-3a7799523203-kube-api-access-s4m8r\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-42pt5\" (UID: \"44655129-a25e-45af-8ae1-3a7799523203\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-42pt5"
Feb 16 16:50:19 functional-252045 kubelet[7331]: I0216 16:50:19.785884 7331 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/44655129-a25e-45af-8ae1-3a7799523203-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-42pt5\" (UID: \"44655129-a25e-45af-8ae1-3a7799523203\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-42pt5"
Feb 16 16:50:21 functional-252045 kubelet[7331]: I0216 16:50:21.725441 7331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c7cce124909defd340cfb5c6fffe438d962cce300c41dda3cd6a7b02ca916e"
Feb 16 16:50:21 functional-252045 kubelet[7331]: I0216 16:50:21.790534 7331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b71d5fa00e1133e4ec46cba6af194309f0afa9ec5a0cc095a8edb70be453a28"
Feb 16 16:50:21 functional-252045 kubelet[7331]: I0216 16:50:21.827990 7331 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203eb244c06269e8999c1074cf42b0cd55cc7f8516344985f47fd4da59f5ddc8"
==> storage-provisioner [0e3511fe4dd8] <==
I0216 16:50:12.182047 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0216 16:50:12.195311 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0216 16:50:12.195381 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
==> storage-provisioner [a7450fd655d3] <==
I0216 16:49:54.773649 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0216 16:49:54.785697 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-252045 -n functional-252045
helpers_test.go:261: (dbg) Run: kubectl --context functional-252045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount hello-node-d7447cc7f-zx5gj dashboard-metrics-scraper-7fd5cb4ddc-42pt5 kubernetes-dashboard-8694d4445c-wtrmv
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-252045 describe pod busybox-mount hello-node-d7447cc7f-zx5gj dashboard-metrics-scraper-7fd5cb4ddc-42pt5 kubernetes-dashboard-8694d4445c-wtrmv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-252045 describe pod busybox-mount hello-node-d7447cc7f-zx5gj dashboard-metrics-scraper-7fd5cb4ddc-42pt5 kubernetes-dashboard-8694d4445c-wtrmv: exit status 1 (83.853688ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-252045/192.168.39.243
Start Time: Fri, 16 Feb 2024 16:50:19 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mount-munger:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gm8cj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-gm8cj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  3s    default-scheduler  Successfully assigned default/busybox-mount to functional-252045
Normal  Pulling    2s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Name: hello-node-d7447cc7f-zx5gj
Namespace: default
Priority: 0
Service Account: default
Node: functional-252045/192.168.39.243
Start Time: Fri, 16 Feb 2024 16:50:16 +0000
Labels: app=hello-node
pod-template-hash=d7447cc7f
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-node-d7447cc7f
Containers:
echoserver:
Container ID:
Image: registry.k8s.io/echoserver:1.8
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cvx7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-5cvx7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  6s    default-scheduler  Successfully assigned default/hello-node-d7447cc7f-zx5gj to functional-252045
Normal  Pulling    4s    kubelet            Pulling image "registry.k8s.io/echoserver:1.8"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-42pt5" not found
Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-wtrmv" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-252045 describe pod busybox-mount hello-node-d7447cc7f-zx5gj dashboard-metrics-scraper-7fd5cb4ddc-42pt5 kubernetes-dashboard-8694d4445c-wtrmv: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.73s)