=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220412120837-7629 --alsologtostderr -v=1]
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:913: output didn't produce a URL
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220412120837-7629 --alsologtostderr -v=1] ...
functional_test.go:905: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220412120837-7629 --alsologtostderr -v=1] stdout:
functional_test.go:905: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220412120837-7629 --alsologtostderr -v=1] stderr:
I0412 12:11:56.900182 8665 out.go:297] Setting OutFile to fd 1 ...
I0412 12:11:56.900581 8665 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 12:11:56.900588 8665 out.go:310] Setting ErrFile to fd 2...
I0412 12:11:56.900592 8665 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 12:11:56.900721 8665 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
I0412 12:11:56.900951 8665 mustload.go:65] Loading cluster: functional-20220412120837-7629
I0412 12:11:56.901266 8665 config.go:178] Loaded profile config "functional-20220412120837-7629": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0412 12:11:56.901646 8665 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.901706 8665 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:56.909408 8665 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57216
I0412 12:11:56.909982 8665 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:56.910482 8665 main.go:134] libmachine: Using API Version 1
I0412 12:11:56.910495 8665 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:56.910770 8665 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:56.910869 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetState
I0412 12:11:56.911042 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0412 12:11:56.911124 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | hyperkit pid from json: 8097
I0412 12:11:56.911989 8665 host.go:66] Checking if "functional-20220412120837-7629" exists ...
I0412 12:11:56.912309 8665 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.912333 8665 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:56.920537 8665 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57218
I0412 12:11:56.920890 8665 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:56.921242 8665 main.go:134] libmachine: Using API Version 1
I0412 12:11:56.921253 8665 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:56.921496 8665 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:56.921599 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .DriverName
I0412 12:11:56.921698 8665 api_server.go:165] Checking apiserver status ...
I0412 12:11:56.921758 8665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0412 12:11:56.921779 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHHostname
I0412 12:11:56.921864 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHPort
I0412 12:11:56.921948 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHKeyPath
I0412 12:11:56.922034 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHUsername
I0412 12:11:56.922120 8665 sshutil.go:53] new ssh client: &{IP:192.168.64.45 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/functional-20220412120837-7629/id_rsa Username:docker}
I0412 12:11:56.969142 8665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7083/cgroup
I0412 12:11:56.975321 8665 api_server.go:181] apiserver freezer: "6:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba05975c8f3c6c8e35a7d8c90e75c4c4.slice/docker-c1865f2d3a6213ec922d040ba36a26f25f521af8e83bf1ab8855f09f2c71c0f3.scope"
I0412 12:11:56.975413 8665 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba05975c8f3c6c8e35a7d8c90e75c4c4.slice/docker-c1865f2d3a6213ec922d040ba36a26f25f521af8e83bf1ab8855f09f2c71c0f3.scope/freezer.state
I0412 12:11:56.982764 8665 api_server.go:203] freezer state: "THAWED"
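
The three ssh_runner commands above implement the freezer check: find the apiserver process, locate its freezer cgroup in /proc/<pid>/cgroup, and read that cgroup's freezer.state, expecting "THAWED". A minimal local sketch of the same sequence (pid 7083 and the cgroup v1 paths are taken from the log lines above; error handling trimmed):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // cgroup v1 entries look like "6:freezer:/kubepods.slice/...".
        data, _ := os.ReadFile("/proc/7083/cgroup")
        for _, line := range strings.Split(string(data), "\n") {
            parts := strings.SplitN(line, ":", 3)
            if len(parts) == 3 && parts[1] == "freezer" {
                state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                fmt.Println(strings.TrimSpace(string(state))) // expect: THAWED
            }
        }
    }
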
I0412 12:11:56.982811 8665 api_server.go:240] Checking apiserver healthz at https://192.168.64.45:8441/healthz ...
I0412 12:11:56.987836 8665 api_server.go:266] https://192.168.64.45:8441/healthz returned 200:
ok
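
The healthz probe at api_server.go:240 boils down to an HTTPS GET whose body must be the literal "ok". A sketch of an equivalent probe (the endpoint is taken from the log line above; certificate verification is skipped here purely for illustration):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
        }}
        resp, err := client.Get("https://192.168.64.45:8441/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
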
W0412 12:11:56.987868 8665 out.go:241] * Enabling dashboard ...
* Enabling dashboard ...
I0412 12:11:56.988026 8665 config.go:178] Loaded profile config "functional-20220412120837-7629": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0412 12:11:56.988037 8665 addons.go:65] Setting dashboard=true in profile "functional-20220412120837-7629"
I0412 12:11:56.988047 8665 addons.go:153] Setting addon dashboard=true in "functional-20220412120837-7629"
I0412 12:11:56.988069 8665 host.go:66] Checking if "functional-20220412120837-7629" exists ...
I0412 12:11:56.988309 8665 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.988331 8665 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:56.995687 8665 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57229
I0412 12:11:56.996163 8665 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:56.996538 8665 main.go:134] libmachine: Using API Version 1
I0412 12:11:56.996548 8665 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:56.996800 8665 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:56.997224 8665 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.997247 8665 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:57.004855 8665 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57231
I0412 12:11:57.005210 8665 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:57.005531 8665 main.go:134] libmachine: Using API Version 1
I0412 12:11:57.005550 8665 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:57.005737 8665 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:57.005837 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetState
I0412 12:11:57.005923 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0412 12:11:57.006015 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | hyperkit pid from json: 8097
I0412 12:11:57.006764 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .DriverName
I0412 12:11:57.070619 8665 out.go:176] - Using image kubernetesui/metrics-scraper:v1.0.7
I0412 12:11:57.096651 8665 out.go:176] - Using image kubernetesui/dashboard:v2.5.1
I0412 12:11:57.096779 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0412 12:11:57.096801 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0412 12:11:57.096822 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHHostname
I0412 12:11:57.097129 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHPort
I0412 12:11:57.097351 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHKeyPath
I0412 12:11:57.097597 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .GetSSHUsername
I0412 12:11:57.097824 8665 sshutil.go:53] new ssh client: &{IP:192.168.64.45 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/functional-20220412120837-7629/id_rsa Username:docker}
I0412 12:11:57.155205 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0412 12:11:57.155229 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0412 12:11:57.169915 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0412 12:11:57.169926 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0412 12:11:57.180988 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0412 12:11:57.181013 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0412 12:11:57.192849 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0412 12:11:57.192860 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0412 12:11:57.205654 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0412 12:11:57.205680 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0412 12:11:57.219150 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0412 12:11:57.219162 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0412 12:11:57.230901 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0412 12:11:57.230912 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0412 12:11:57.242470 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0412 12:11:57.242480 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0412 12:11:57.253802 8665 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0412 12:11:57.253814 8665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0412 12:11:57.273631 8665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
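
Note that the ten staged manifests are applied with a single batched kubectl invocation rather than one apply per file. A sketch of assembling such a command line (hypothetical helper, not minikube's actual code; the paths mirror the command above):

    package main

    import "fmt"

    // buildApplyArgs builds one batched "kubectl apply" argument list
    // from a list of addon manifests.
    func buildApplyArgs(kubectl string, manifests []string) []string {
        args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return args
    }

    func main() {
        fmt.Println(buildApplyArgs("/var/lib/minikube/binaries/v1.23.5/kubectl", []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }))
    }
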
I0412 12:11:57.629719 8665 main.go:134] libmachine: Making call to close driver server
I0412 12:11:57.629749 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .Close
I0412 12:11:57.630061 8665 main.go:134] libmachine: Successfully made call to close driver server
I0412 12:11:57.630070 8665 main.go:134] libmachine: Making call to close connection to plugin binary
I0412 12:11:57.630079 8665 main.go:134] libmachine: Making call to close driver server
I0412 12:11:57.630085 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | Closing plugin on server side
I0412 12:11:57.630088 8665 main.go:134] libmachine: (functional-20220412120837-7629) Calling .Close
I0412 12:11:57.630198 8665 main.go:134] libmachine: Successfully made call to close driver server
I0412 12:11:57.630210 8665 main.go:134] libmachine: Making call to close connection to plugin binary
I0412 12:11:57.630222 8665 addons.go:116] Writing out "functional-20220412120837-7629" config to set dashboard=true...
I0412 12:11:57.630247 8665 main.go:134] libmachine: (functional-20220412120837-7629) DBG | Closing plugin on server side
W0412 12:11:57.630909 8665 out.go:241] * Verifying dashboard health ...
* Verifying dashboard health ...
I0412 12:11:57.631683 8665 kapi.go:59] client config for functional-20220412120837-7629: &rest.Config{Host:"https://192.168.64.45:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412120837-7629/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412120837-7629/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2220f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0412 12:11:57.640082 8665 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 0129260f-c52c-45f5-87e5-f49d903f4925 840 0 2022-04-12 12:11:57 -0700 PDT <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2022-04-12 12:11:57 -0700 PDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.148.163,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.148.163],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0412 12:11:57.640195 8665 out.go:241] * Launching proxy ...
* Launching proxy ...
I0412 12:11:57.640284 8665 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220412120837-7629 proxy --port 36195]
I0412 12:11:57.642218 8665 dashboard.go:157] Waiting for kubectl to output host:port ...
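
dashboard.go:157 is waiting for kubectl proxy's startup banner ("Starting to serve on HOST:PORT") so the dashboard URL can be built from it. A hypothetical banner scanner (the regex is an assumption, not minikube's actual pattern), run against both a healthy line and the damaged one captured at dashboard.go:175 below:

    package main

    import (
        "fmt"
        "regexp"
    )

    var banner = regexp.MustCompile(`Starting to serve on (\S+:\d+)`)

    func main() {
        for _, line := range []string{
            "Starting to serve on 127.0.0.1:36195", // what kubectl proxy normally prints
            "Starting to serve on 127.0 0.1:36195", // what this run captured
        } {
            fmt.Printf("%q -> %v\n", line, banner.FindStringSubmatch(line))
        }
    }

The stray space means the host portion is already damaged before any URL is assembled from it.
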
I0412 12:11:57.682111 8665 dashboard.go:175] proxy stdout: Starting to serve on 127.0 0.1:36195
W0412 12:11:57.682160 8665 out.go:241] * Verifying proxy health ...
* Verifying proxy health ...
I0412 12:11:57.682175 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.682286 8665 retry.go:31] will retry after 110.466µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.682490 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.682513 8665 retry.go:31] will retry after 216.077µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.682849 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.682863 8665 retry.go:31] will retry after 262.026µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.683283 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.683299 8665 retry.go:31] will retry after 316.478µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.683717 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.683734 8665 retry.go:31] will retry after 468.098µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.684456 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.684484 8665 retry.go:31] will retry after 901.244µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.685844 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.685857 8665 retry.go:31] will retry after 644.295µs: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.686852 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.686865 8665 retry.go:31] will retry after 1.121724ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.688034 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.688048 8665 retry.go:31] will retry after 1.529966ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.690376 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.690393 8665 retry.go:31] will retry after 3.078972ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.693563 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.693579 8665 retry.go:31] will retry after 5.854223ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.699558 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.699597 8665 retry.go:31] will retry after 11.362655ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.711218 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.711247 8665 retry.go:31] will retry after 9.267303ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.721079 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.721109 8665 retry.go:31] will retry after 17.139291ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.742128 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.742186 8665 retry.go:31] will retry after 23.881489ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.773981 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.774058 8665 retry.go:31] will retry after 42.427055ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.824634 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.824675 8665 retry.go:31] will retry after 51.432832ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.878500 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.878530 8665 retry.go:31] will retry after 78.14118ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:57.964341 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:57.964379 8665 retry.go:31] will retry after 174.255803ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:58.141809 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:58.141871 8665 retry.go:31] will retry after 159.291408ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:58.306751 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:58.306785 8665 retry.go:31] will retry after 233.827468ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:58.541076 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:58.541106 8665 retry.go:31] will retry after 429.392365ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:58.977857 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:58.977908 8665 retry.go:31] will retry after 801.058534ms: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:11:59.781675 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:11:59.781706 8665 retry.go:31] will retry after 1.529087469s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:01.312676 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:01.312713 8665 retry.go:31] will retry after 1.335136154s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:02.647944 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:02.648001 8665 retry.go:31] will retry after 2.012724691s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:04.663561 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:04.663599 8665 retry.go:31] will retry after 4.744335389s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:09.412289 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:09.412380 8665 retry.go:31] will retry after 4.014454686s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:13.432734 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:13.432808 8665 retry.go:31] will retry after 11.635741654s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:25.077022 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:25.077089 8665 retry.go:31] will retry after 15.298130033s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:12:40.382636 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:12:40.382732 8665 retry.go:31] will retry after 19.631844237s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:13:00.015779 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:13:00.015859 8665 retry.go:31] will retry after 15.195386994s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:13:15.212670 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:13:15.212785 8665 retry.go:31] will retry after 28.402880652s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:13:43.625494 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:13:43.625570 8665 retry.go:31] will retry after 1m6.435206373s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:14:50.063150 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:14:50.063211 8665 retry.go:31] will retry after 1m28.514497132s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:16:18.587089 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:16:18.587132 8665 retry.go:31] will retry after 34.767217402s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
I0412 12:16:53.363044 8665 dashboard.go:212] http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name <nil>
I0412 12:16:53.363153 8665 retry.go:31] will retry after 1m5.688515861s: checkURL: parse "http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": invalid character " " in host name
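
Every retry above fails inside url.Parse, before any request is sent: the space in "127.0 0.1" comes from the captured proxy stdout, so no amount of backoff can succeed. The error is reproducible verbatim with the standard library:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // Exactly the URL checkURL keeps retrying above.
        _, err := url.Parse("http://127.0 0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/")
        fmt.Println(err)
        // parse "http://127.0 0.1:36195/...": invalid character " " in host name
    }
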
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220412120837-7629 -n functional-20220412120837-7629
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p functional-20220412120837-7629 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220412120837-7629 logs -n 25: (2.579183746s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:30 PDT | Tue, 12 Apr 2022 12:11:30 PDT |
| | addons list -o json | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:45 PDT | Tue, 12 Apr 2022 12:11:45 PDT |
| | service hello-node-connect | | | | | |
| | --url | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:53 PDT | Tue, 12 Apr 2022 12:11:53 PDT |
| | ssh findmnt -T /mount-9p | | | | | | |
| | grep 9p | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:53 PDT | Tue, 12 Apr 2022 12:11:53 PDT |
| | ssh -- ls -la /mount-9p | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:53 PDT | Tue, 12 Apr 2022 12:11:53 PDT |
| | ssh cat | | | | | |
| | /mount-9p/test-1649790712601894000 | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:53 PDT | Tue, 12 Apr 2022 12:11:54 PDT |
| | service list | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:54 PDT | Tue, 12 Apr 2022 12:11:54 PDT |
| | service --namespace=default | | | | | |
| | --https --url hello-node | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:54 PDT | Tue, 12 Apr 2022 12:11:55 PDT |
| | service hello-node --url | | | | | |
| | --format={{.IP}} | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:55 PDT | Tue, 12 Apr 2022 12:11:55 PDT |
| | service hello-node --url | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:56 PDT | Tue, 12 Apr 2022 12:11:57 PDT |
| | ssh stat | | | | | |
| | /mount-9p/created-by-test | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:57 PDT | Tue, 12 Apr 2022 12:11:57 PDT |
| | ssh stat | | | | | |
| | /mount-9p/created-by-pod | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:57 PDT | Tue, 12 Apr 2022 12:11:57 PDT |
| | ssh sudo umount -f /mount-9p | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:58 PDT | Tue, 12 Apr 2022 12:11:58 PDT |
| | ssh findmnt -T /mount-9p | | | | | | |
| | grep 9p | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:58 PDT | Tue, 12 Apr 2022 12:11:58 PDT |
| | ssh -- ls -la /mount-9p | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:59 PDT | Tue, 12 Apr 2022 12:11:59 PDT |
| | version --short | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:59 PDT | Tue, 12 Apr 2022 12:11:59 PDT |
| | version -o=json --components | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:11:59 PDT | Tue, 12 Apr 2022 12:11:59 PDT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:00 PDT | Tue, 12 Apr 2022 12:12:00 PDT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:00 PDT | Tue, 12 Apr 2022 12:12:00 PDT |
| | update-context | | | | | |
| | --alsologtostderr -v=2 | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:00 PDT | Tue, 12 Apr 2022 12:12:00 PDT |
| | image ls --format short | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:00 PDT | Tue, 12 Apr 2022 12:12:00 PDT |
| | image ls --format yaml | | | | | |
| -p | functional-20220412120837-7629 image build -t | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:00 PDT | Tue, 12 Apr 2022 12:12:03 PDT |
| | localhost/my-image:functional-20220412120837-7629 | | | | | |
| | testdata/build | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:03 PDT | Tue, 12 Apr 2022 12:12:03 PDT |
| | image ls | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:03 PDT | Tue, 12 Apr 2022 12:12:03 PDT |
| | image ls --format json | | | | | |
| -p | functional-20220412120837-7629 | functional-20220412120837-7629 | jenkins | v1.25.2 | Tue, 12 Apr 2022 12:12:03 PDT | Tue, 12 Apr 2022 12:12:03 PDT |
| | image ls --format table | | | | | |
|---------|---------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/04/12 12:11:56
Running on machine: administrators-Mac-mini
Binary: Built with gc go1.18 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0412 12:11:56.502762 8656 out.go:297] Setting OutFile to fd 1 ...
I0412 12:11:56.502898 8656 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 12:11:56.502903 8656 out.go:310] Setting ErrFile to fd 2...
I0412 12:11:56.502907 8656 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 12:11:56.503018 8656 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
I0412 12:11:56.503272 8656 out.go:304] Setting JSON to false
I0412 12:11:56.517548 8656 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4291,"bootTime":1649786425,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0412 12:11:56.517644 8656 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0412 12:11:56.544785 8656 out.go:176] * [functional-20220412120837-7629] minikube v1.25.2 on Darwin 11.1
I0412 12:11:56.570267 8656 out.go:176] - MINIKUBE_LOCATION=13812
I0412 12:11:56.596171 8656 out.go:176] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0412 12:11:56.626379 8656 out.go:176] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0412 12:11:56.652333 8656 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0412 12:11:56.678422 8656 out.go:176] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--13812-6803-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
I0412 12:11:56.679232 8656 config.go:178] Loaded profile config "functional-20220412120837-7629": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.23.5
I0412 12:11:56.679996 8656 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.680106 8656 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:56.688064 8656 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57209
I0412 12:11:56.688513 8656 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:56.688929 8656 main.go:134] libmachine: Using API Version 1
I0412 12:11:56.688940 8656 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:56.689161 8656 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:56.689262 8656 main.go:134] libmachine: (functional-20220412120837-7629) Calling .DriverName
I0412 12:11:56.689381 8656 driver.go:346] Setting default libvirt URI to qemu:///system
I0412 12:11:56.689662 8656 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0412 12:11:56.689684 8656 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0412 12:11:56.696530 8656 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:57211
I0412 12:11:56.696875 8656 main.go:134] libmachine: () Calling .GetVersion
I0412 12:11:56.697206 8656 main.go:134] libmachine: Using API Version 1
I0412 12:11:56.697221 8656 main.go:134] libmachine: () Calling .SetConfigRaw
I0412 12:11:56.697424 8656 main.go:134] libmachine: () Calling .GetMachineName
I0412 12:11:56.697506 8656 main.go:134] libmachine: (functional-20220412120837-7629) Calling .DriverName
I0412 12:11:56.744768 8656 out.go:176] * Using the hyperkit driver based on existing profile
I0412 12:11:56.744810 8656 start.go:284] selected driver: hyperkit
I0412 12:11:56.744825 8656 start.go:801] validating driver "hyperkit" against &{Name:functional-20220412120837-7629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13659/minikube-v1.25.2-1649577058-13659.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412120837-7629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.45 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0412 12:11:56.745032 8656 start.go:812] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0412 12:11:56.748189 8656 cni.go:93] Creating CNI manager for ""
I0412 12:11:56.748211 8656 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0412 12:11:56.748224 8656 start_flags.go:306] config:
{Name:functional-20220412120837-7629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13659/minikube-v1.25.2-1649577058-13659.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412120837-7629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.45 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
*
* ==> Docker <==
* -- Journal begins at Tue 2022-04-12 19:08:45 UTC, ends at Tue 2022-04-12 19:16:57 UTC. --
Apr 12 19:11:54 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:54.450314470Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c45a15678f696becfaa07d9cf89bf91a38ae31266ebdbf76ec696e4014283393 pid=9850
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.111678164Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa pid=9966
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:11:56.272240094Z" level=info msg="ignoring event" container=a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.272726017Z" level=info msg="shim disconnected" id=a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.273270300Z" level=warning msg="cleaning up after shim disconnected" id=a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa namespace=moby
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.273352549Z" level=info msg="cleaning up dead shim"
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.282840353Z" level=warning msg="cleanup warnings time=\"2022-04-12T19:11:56Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10016\n"
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:11:56.912234574Z" level=info msg="ignoring event" container=c45a15678f696becfaa07d9cf89bf91a38ae31266ebdbf76ec696e4014283393 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.912742939Z" level=info msg="shim disconnected" id=c45a15678f696becfaa07d9cf89bf91a38ae31266ebdbf76ec696e4014283393
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.912876776Z" level=warning msg="cleaning up after shim disconnected" id=c45a15678f696becfaa07d9cf89bf91a38ae31266ebdbf76ec696e4014283393 namespace=moby
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.912933823Z" level=info msg="cleaning up dead shim"
Apr 12 19:11:56 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:56.922364779Z" level=warning msg="cleanup warnings time=\"2022-04-12T19:11:56Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10059\n"
Apr 12 19:11:58 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:58.487737536Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5dd5078cb97fa637b8cd94dcc386d7889b5ee7a295ac1deed34d6fd6745fe1f0 pid=10226
Apr 12 19:11:58 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:11:58.490461065Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0756498d9160b778b69c8ee797f11c3b5f6d7a3ab60f6a7a05ac0df3144458ae pid=10233
Apr 12 19:11:59 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:11:59.225903985Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
Apr 12 19:12:02 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:02.691377823Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1e90a8416ef09e943687f866f3a51a4fbb6d01495423189a6464226942738f51 pid=10531
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:03.218776643Z" level=info msg="shim disconnected" id=1e90a8416ef09e943687f866f3a51a4fbb6d01495423189a6464226942738f51
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:03.218927149Z" level=warning msg="cleaning up after shim disconnected" id=1e90a8416ef09e943687f866f3a51a4fbb6d01495423189a6464226942738f51 namespace=moby
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:03.219084179Z" level=info msg="cleaning up dead shim"
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:12:03.219369036Z" level=info msg="ignoring event" container=1e90a8416ef09e943687f866f3a51a4fbb6d01495423189a6464226942738f51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:03.241508811Z" level=warning msg="cleanup warnings time=\"2022-04-12T19:12:03Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10593\n"
Apr 12 19:12:03 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:12:03.425412439Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
Apr 12 19:12:04 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:04.634366237Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f91c0ecb52c539b92e0c408adcee716b6f8ac91b97012698d3ead83f7fb89837 pid=10687
Apr 12 19:12:04 functional-20220412120837-7629 dockerd[2253]: time="2022-04-12T19:12:04.808118923Z" level=warning msg="reference for unknown type: " digest="sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172" remote="docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"
Apr 12 19:12:06 functional-20220412120837-7629 dockerd[2260]: time="2022-04-12T19:12:06.827902756Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4a0bf53262ddd281a18cff31062757c1ac8f4011d36b8ddef537db3d91534977 pid=10780
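The dockerd entries above are Docker's embedded containerd reporting task lifecycle: each "starting signal loop" line is a new runtime-v2 shim starting, and each /tasks/delete event marks a container task being torn down (here the busybox mount-munger and its sandbox), followed by shim cleanup. A minimal Go sketch of watching the same event stream, assuming the Docker-managed containerd socket path and "moby" namespace visible in the log lines:

```go
// Sketch (not part of the test): subscribe to containerd's event stream and
// print the /tasks/delete events that dockerd logs above as "ignoring event".
// Assumes Docker's embedded containerd socket and the "moby" namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/docker/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "moby")

	ch, errs := client.Subscribe(ctx) // all events; filtered in code below
	for {
		select {
		case env := <-ch:
			if env.Topic == "/tasks/delete" { // matches topic=/tasks/delete above
				fmt.Printf("namespace=%s topic=%s\n", env.Namespace, env.Topic)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```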
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4a0bf53262ddd kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172 4 minutes ago Running dashboard-metrics-scraper 0 5dd5078cb97fa
f91c0ecb52c53 kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 4 minutes ago Running kubernetes-dashboard 0 0756498d9160b
a4fb6b97ea914 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 5 minutes ago Exited mount-munger 0 c45a15678f696
cdcd3da943e50 82e4c8a736a4f 5 minutes ago Running echoserver 0 5e94df6f1f88b
8630fe9149920 nginx@sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1 5 minutes ago Running myfrontend 0 960ffa5c3c91a
5621bf5789c93 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 5 minutes ago Running echoserver 0 32419f1a4007a
5f093802a0638 nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31 5 minutes ago Running nginx 0 6fd9ecb0d6fc1
e4c7d28494f43 mysql@sha256:1a73b6a8f507639a8f91ed01ace28965f4f74bb62a9d9b9e7378d5f07fab79dc 5 minutes ago Running mysql 0 8fb9456a2d861
c109859936e2f a4ca41631cc7a 6 minutes ago Running coredns 1 7b0770475f9ce
c1865f2d3a621 3fc1d62d65872 6 minutes ago Running kube-apiserver 1 610705066ec0f
81f4980057496 3fc1d62d65872 6 minutes ago Exited kube-apiserver 0 610705066ec0f
fd88ccc3cd3a4 6e38f40d628db 6 minutes ago Running storage-provisioner 1 467c5e683c575
2974b4132f687 884d49d6d8c9f 6 minutes ago Running kube-scheduler 1 6e2fa4892419c
c307cea21ed9f 25f8c7f3da61c 6 minutes ago Running etcd 1 095a857a3e35a
3ff4299c2db63 b0c9e5e4dbb14 6 minutes ago Running kube-controller-manager 1 3128c7261f446
989d658524b15 3c53fa8541f95 6 minutes ago Running kube-proxy 1 63685f978e2ba
03254aef4e87d 6e38f40d628db 7 minutes ago Exited storage-provisioner 0 5e7fc036cd091
3be86975dc4a8 a4ca41631cc7a 7 minutes ago Exited coredns 0 dae31d3599dba
6b630bf2ec18e 3c53fa8541f95 7 minutes ago Exited kube-proxy 0 01c529833505c
9c2695cfc44fe 884d49d6d8c9f 7 minutes ago Exited kube-scheduler 0 781caa0bc8beb
85aba29c46e7d b0c9e5e4dbb14 7 minutes ago Exited kube-controller-manager 0 630fb8b7efdd5
7337d33ac052b 25f8c7f3da61c 7 minutes ago Exited etcd 0 40e90c84ecfe4
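The table above is the container inventory at dump time: the attempt-0 control-plane containers from before the restart show as Exited, their attempt-1 replacements as Running, and the dashboard pair is freshly Running. A rough equivalent with the Docker Engine Go SDK, which is an assumption here (chosen to match the docker://20.10.14 runtime reported below; newer SDK versions move the list options into the `container` package):

```go
// Sketch: list all containers (including Exited ones) with ID, state, and
// image, roughly mirroring the container-status table above. Point it at the
// minikube VM's daemon, e.g. after `eval $(minikube docker-env)`.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// All: true includes Exited containers, as in the table above.
	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Printf("%.12s  %-8s  %s\n", c.ID, c.State, c.Image)
	}
}
```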
*
* ==> coredns [3be86975dc4a] <==
* [INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
[INFO] Reloading complete
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
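The repeated `plugin/ready: Still waiting on: "kubernetes"` lines mean CoreDNS's `ready` plugin held its readiness endpoint non-200 until the kubernetes plugin finished syncing with the API server; the Reloading/lameduck lines then record a config change and a graceful SIGTERM shutdown. A small polling sketch against that endpoint, assuming the default `ready` port 8181 made reachable (for example via a port-forward):

```go
// Sketch: poll CoreDNS's readiness endpoint. It returns non-200 exactly
// while the "Still waiting on" lines above are being logged.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	for {
		resp, err := http.Get("http://127.0.0.1:8181/ready")
		if err == nil {
			fmt.Println("readiness:", resp.Status)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		} else {
			log.Println("probe failed:", err)
		}
		time.Sleep(2 * time.Second)
	}
}
```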
*
* ==> coredns [c109859936e2] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: functional-20220412120837-7629
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-20220412120837-7629
kubernetes.io/os=linux
minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
minikube.k8s.io/name=functional-20220412120837-7629
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_04_12T12_09_10_0700
minikube.k8s.io/version=v1.25.2
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Apr 2022 19:09:07 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-20220412120837-7629
AcquireTime: <unset>
RenewTime: Tue, 12 Apr 2022 19:16:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Apr 2022 19:12:33 +0000 Tue, 12 Apr 2022 19:09:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 12 Apr 2022 19:12:33 +0000 Tue, 12 Apr 2022 19:09:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 12 Apr 2022 19:12:33 +0000 Tue, 12 Apr 2022 19:09:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 12 Apr 2022 19:12:33 +0000 Tue, 12 Apr 2022 19:10:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.45
Hostname: functional-20220412120837-7629
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935172Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935172Ki
pods: 110
System Info:
Machine ID: 91c80b31115347fc8281232e9ff2ffa7
System UUID: f52a11ec-0000-0000-997d-149d997cd0f1
Boot ID: d12ba17d-ff0f-4a90-ae9e-63d31289f934
Kernel Version: 4.19.202
OS Image: Buildroot 2021.02.4
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.14
Kubelet Version: v1.23.5
Kube-Proxy Version: v1.23.5
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-node-54fbb85-fzv5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m13s
default hello-node-connect-74cf8bc446-jxsp7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m27s
default mysql-b87c45988-9hgv5 600m (30%) 700m (35%) 512Mi (13%) 700Mi (18%) 5m56s
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m14s
kube-system coredns-64897985d-7chzb 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 7m35s
kube-system etcd-functional-20220412120837-7629 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 7m48s
kube-system kube-apiserver-functional-20220412120837-7629 250m (12%) 0 (0%) 0 (0%) 0 (0%) 6m22s
kube-system kube-controller-manager-functional-20220412120837-7629 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m48s
kube-system kube-proxy-xtn7s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m36s
kube-system kube-scheduler-functional-20220412120837-7629 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m47s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
kubernetes-dashboard dashboard-metrics-scraper-58549894f-5jtkn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
kubernetes-dashboard kubernetes-dashboard-8469778f77-gzkhw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m1s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1350m (67%) 700m (35%)
memory 682Mi (17%) 870Mi (22%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m33s kube-proxy
Normal Starting 6m28s kube-proxy
Normal NodeHasSufficientMemory 7m48s kubelet Node functional-20220412120837-7629 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m48s kubelet Node functional-20220412120837-7629 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m48s kubelet Node functional-20220412120837-7629 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m48s kubelet Updated Node Allocatable limit across pods
Normal Starting 7m48s kubelet Starting kubelet.
Normal NodeReady 7m37s kubelet Node functional-20220412120837-7629 status is now: NodeReady
Normal Starting 6m28s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m28s kubelet Node functional-20220412120837-7629 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m28s kubelet Node functional-20220412120837-7629 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m28s kubelet Node functional-20220412120837-7629 status is now: NodeHasSufficientPID
Normal NodeNotReady 6m28s kubelet Node functional-20220412120837-7629 status is now: NodeNotReady
Normal NodeAllocatableEnforced 6m28s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 6m28s kubelet Node functional-20220412120837-7629 status is now: NodeReady
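The percentages in the Allocated resources block above are each total divided by the node's Allocatable and truncated: 1350m CPU of 2000m allocatable is 67%, and 682Mi of requests against 3935172Ki of memory is 17%. A quick check with the values copied from the description:

```go
// Arithmetic check, values taken from the node description above:
// kubectl divides each total by Allocatable and truncates the percentage.
package main

import "fmt"

func main() {
	allocCPU := 2000.0      // Allocatable: cpu 2 cores = 2000m
	allocMemKi := 3935172.0 // Allocatable: memory 3935172Ki

	// 600m mysql + 100m coredns + 100m etcd + 250m apiserver
	// + 200m controller-manager + 100m scheduler = 1350m
	reqCPU := 1350.0
	reqMemKi := 682.0 * 1024 // 682Mi (512Mi + 70Mi + 100Mi) in Ki

	fmt.Printf("cpu requests: %d%%\n", int(100*reqCPU/allocCPU))        // 67%
	fmt.Printf("memory requests: %d%%\n", int(100*reqMemKi/allocMemKi)) // 17%
}
```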
*
* ==> dmesg <==
* [ +0.243147] systemd-fstab-generator[2405]: Ignoring "noauto" for root device
[ +0.101666] systemd-fstab-generator[2416]: Ignoring "noauto" for root device
[ +0.093540] systemd-fstab-generator[2427]: Ignoring "noauto" for root device
[Apr12 19:09] systemd-fstab-generator[2664]: Ignoring "noauto" for root device
[ +0.572918] kauditd_printk_skb: 107 callbacks suppressed
[ +8.137736] systemd-fstab-generator[3432]: Ignoring "noauto" for root device
[ +13.384240] kauditd_printk_skb: 38 callbacks suppressed
[ +11.290767] kauditd_printk_skb: 62 callbacks suppressed
[Apr12 19:10] kauditd_printk_skb: 5 callbacks suppressed
[ +3.229810] systemd-fstab-generator[4753]: Ignoring "noauto" for root device
[ +0.145792] systemd-fstab-generator[4764]: Ignoring "noauto" for root device
[ +0.137497] systemd-fstab-generator[4775]: Ignoring "noauto" for root device
[ +15.746535] systemd-fstab-generator[5377]: Ignoring "noauto" for root device
[ +0.143531] systemd-fstab-generator[5388]: Ignoring "noauto" for root device
[ +0.141327] systemd-fstab-generator[5399]: Ignoring "noauto" for root device
[ +6.862745] systemd-fstab-generator[6619]: Ignoring "noauto" for root device
[ +18.710935] NFSD: Unable to end grace period: -110
[Apr12 19:11] kauditd_printk_skb: 5 callbacks suppressed
[ +5.085983] kauditd_printk_skb: 8 callbacks suppressed
[ +9.657940] kauditd_printk_skb: 5 callbacks suppressed
[ +9.613695] kauditd_printk_skb: 8 callbacks suppressed
[ +5.540674] kauditd_printk_skb: 2 callbacks suppressed
[ +8.698209] kauditd_printk_skb: 11 callbacks suppressed
[Apr12 19:12] kauditd_printk_skb: 14 callbacks suppressed
[ +5.516719] kauditd_printk_skb: 2 callbacks suppressed
*
* ==> etcd [7337d33ac052] <==
* {"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became pre-candidate at term 1"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 received MsgPreVoteResp from a0db35bfa35b2080 at term 1"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became candidate at term 2"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 received MsgVoteResp from a0db35bfa35b2080 at term 2"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became leader at term 2"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a0db35bfa35b2080 elected leader a0db35bfa35b2080 at term 2"}
{"level":"info","ts":"2022-04-12T19:09:05.270Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:09:05.271Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3a0d17ec8e6c76f","local-member-id":"a0db35bfa35b2080","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"a0db35bfa35b2080","local-member-attributes":"{Name:functional-20220412120837-7629 ClientURLs:[https://192.168.64.45:2379]}","request-path":"/0/members/a0db35bfa35b2080/attributes","cluster-id":"c3a0d17ec8e6c76f","publish-timeout":"7s"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-04-12T19:09:05.272Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:09:05.276Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-04-12T19:09:05.279Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.45:2379"}
{"level":"info","ts":"2022-04-12T19:10:23.813Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-04-12T19:10:23.813Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220412120837-7629","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.45:2380"],"advertise-client-urls":["https://192.168.64.45:2379"]}
WARNING: 2022/04/12 19:10:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/04/12 19:10:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.45:2379 192.168.64.45:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.45:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-04-12T19:10:23.824Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a0db35bfa35b2080","current-leader-member-id":"a0db35bfa35b2080"}
{"level":"info","ts":"2022-04-12T19:10:23.825Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.64.45:2380"}
{"level":"info","ts":"2022-04-12T19:10:23.827Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.64.45:2380"}
{"level":"info","ts":"2022-04-12T19:10:23.827Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220412120837-7629","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.45:2380"],"advertise-client-urls":["https://192.168.64.45:2379"]}
*
* ==> etcd [c307cea21ed9] <==
* {"level":"info","ts":"2022-04-12T19:10:26.337Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"a0db35bfa35b2080","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-04-12T19:10:26.339Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-04-12T19:10:26.340Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"a0db35bfa35b2080","initial-advertise-peer-urls":["https://192.168.64.45:2380"],"listen-peer-urls":["https://192.168.64.45:2380"],"advertise-client-urls":["https://192.168.64.45:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.45:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-04-12T19:10:26.342Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-04-12T19:10:26.342Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-04-12T19:10:26.342Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.64.45:2380"}
{"level":"info","ts":"2022-04-12T19:10:26.342Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.64.45:2380"}
{"level":"info","ts":"2022-04-12T19:10:26.343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 switched to configuration voters=(11590917163163787392)"}
{"level":"info","ts":"2022-04-12T19:10:26.343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c3a0d17ec8e6c76f","local-member-id":"a0db35bfa35b2080","added-peer-id":"a0db35bfa35b2080","added-peer-peer-urls":["https://192.168.64.45:2380"]}
{"level":"info","ts":"2022-04-12T19:10:26.343Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c3a0d17ec8e6c76f","local-member-id":"a0db35bfa35b2080","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:10:26.343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 is starting a new election at term 2"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became pre-candidate at term 2"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 received MsgPreVoteResp from a0db35bfa35b2080 at term 2"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became candidate at term 3"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 received MsgVoteResp from a0db35bfa35b2080 at term 3"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a0db35bfa35b2080 became leader at term 3"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a0db35bfa35b2080 elected leader a0db35bfa35b2080 at term 3"}
{"level":"info","ts":"2022-04-12T19:10:27.919Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"a0db35bfa35b2080","local-member-attributes":"{Name:functional-20220412120837-7629 ClientURLs:[https://192.168.64.45:2379]}","request-path":"/0/members/a0db35bfa35b2080/attributes","cluster-id":"c3a0d17ec8e6c76f","publish-timeout":"7s"}
{"level":"info","ts":"2022-04-12T19:10:27.920Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:10:27.920Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:10:27.921Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.45:2379"}
{"level":"info","ts":"2022-04-12T19:10:27.921Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-04-12T19:10:27.924Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-04-12T19:10:27.924Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 19:16:58 up 8 min, 0 users, load average: 0.14, 0.34, 0.24
Linux functional-20220412120837-7629 4.19.202 #1 SMP Sun Apr 10 08:33:48 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.4"
*
* ==> kube-apiserver [81f498005749] <==
* I0412 19:10:31.429722 1 server.go:565] external host was not specified, using 192.168.64.45
I0412 19:10:31.430227 1 server.go:172] Version: v1.23.5
E0412 19:10:31.430485 1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
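This first kube-apiserver attempt died immediately because port 8441 was still held by the previous instance; the replacement container (next section) came up once the port was freed. The failure class is easy to reproduce with two listeners, as in this minimal sketch (assumes 8441 is initially free on the machine running it):

```go
// Sketch: a second listener on an already-bound port fails with
// EADDRINUSE, the same "bind: address already in use" error the restarted
// apiserver hit on 0.0.0.0:8441 above.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	first, err := net.Listen("tcp", "0.0.0.0:8441")
	if err != nil {
		log.Fatal(err)
	}
	defer first.Close()

	_, err = net.Listen("tcp", "0.0.0.0:8441")
	fmt.Println("second listen:", err) // ... bind: address already in use
}
```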
*
* ==> kube-apiserver [c1865f2d3a62] <==
* I0412 19:10:36.296174 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0412 19:10:36.296255 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0412 19:10:36.303244 1 cache.go:39] Caches are synced for autoregister controller
I0412 19:10:36.388158 1 shared_informer.go:247] Caches are synced for node_authorizer
I0412 19:10:36.391957 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0412 19:10:36.399256 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0412 19:10:36.399623 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0412 19:10:37.186561 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0412 19:10:37.186628 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0412 19:10:37.192366 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0412 19:10:40.447691 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0412 19:10:42.108475 1 controller.go:611] quota admission added evaluator for: endpoints
I0412 19:10:42.187348 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0412 19:11:02.749902 1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.96.217.40]
I0412 19:11:02.754610 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0412 19:11:02.766145 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0412 19:11:20.538508 1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.99.9.177]
I0412 19:11:31.306164 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.106.205.248]
I0412 19:11:45.690656 1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.96.176.9]
I0412 19:11:57.678082 1 controller.go:611] quota admission added evaluator for: namespaces
I0412 19:11:57.691990 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0412 19:11:57.766956 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0412 19:11:57.786299 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0412 19:11:57.884154 1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.148.163]
I0412 19:11:57.903930 1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.98.98]
*
* ==> kube-controller-manager [3ff4299c2db6] <==
* I0412 19:11:02.786512 1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-9hgv5"
I0412 19:11:29.697397 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0412 19:11:29.697434 1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
I0412 19:11:31.231320 1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
I0412 19:11:31.241076 1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-jxsp7"
I0412 19:11:45.624426 1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
I0412 19:11:45.628614 1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-fzv5w"
I0412 19:11:57.750834 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-58549894f to 1"
I0412 19:11:57.760656 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0412 19:11:57.762089 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
E0412 19:11:57.770150 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0412 19:11:57.770645 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0412 19:11:57.778681 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0412 19:11:57.782203 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0412 19:11:57.782557 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0412 19:11:57.788363 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0412 19:11:57.788376 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0412 19:11:57.788423 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0412 19:11:57.789112 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0412 19:11:57.794812 1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0412 19:11:57.794984 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0412 19:11:57.796567 1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0412 19:11:57.796714 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0412 19:11:57.805848 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-gzkhw"
I0412 19:11:57.843552 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-5jtkn"
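The burst of FailedCreate events above is a startup race, not a persistent failure: the dashboard addon's ReplicaSets were reconciled before their ServiceAccount existed, and each failed sync was requeued with backoff until the SuccessfulCreate lines at the end, once the addon manifests had created the ServiceAccount. A toy sketch of that retry pattern (not the controller's actual code):

```go
// Sketch of the requeue-with-backoff behavior visible above: pod creation
// fails while the ServiceAccount is missing, then succeeds on a later try.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errSANotFound = errors.New(`serviceaccount "kubernetes-dashboard" not found`)

func createPod(attempt int) error {
	if attempt < 4 { // pretend the ServiceAccount appears before the 4th try
		return errSANotFound
	}
	return nil
}

func main() {
	delay := 5 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := createPod(attempt); err != nil {
			fmt.Printf("sync failed (attempt %d): %v\n", attempt, err)
			time.Sleep(delay)
			delay *= 2 // exponential backoff between requeues
			continue
		}
		fmt.Printf("SuccessfulCreate on attempt %d\n", attempt)
		return
	}
}
```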
*
* ==> kube-controller-manager [85aba29c46e7] <==
* I0412 19:09:22.342649 1 range_allocator.go:374] Set node functional-20220412120837-7629 PodCIDR to [10.244.0.0/24]
I0412 19:09:22.348932 1 shared_informer.go:247] Caches are synced for cronjob
I0412 19:09:22.356422 1 shared_informer.go:247] Caches are synced for persistent volume
I0412 19:09:22.376008 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0412 19:09:22.379099 1 shared_informer.go:247] Caches are synced for attach detach
I0412 19:09:22.379506 1 shared_informer.go:247] Caches are synced for stateful set
I0412 19:09:22.421293 1 shared_informer.go:247] Caches are synced for PVC protection
I0412 19:09:22.428402 1 shared_informer.go:247] Caches are synced for ephemeral
I0412 19:09:22.430527 1 shared_informer.go:247] Caches are synced for expand
I0412 19:09:22.431699 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0412 19:09:22.485524 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0412 19:09:22.530667 1 shared_informer.go:247] Caches are synced for crt configmap
I0412 19:09:22.532220 1 shared_informer.go:247] Caches are synced for resource quota
I0412 19:09:22.533005 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0412 19:09:22.542105 1 shared_informer.go:247] Caches are synced for endpoint
I0412 19:09:22.558551 1 shared_informer.go:247] Caches are synced for resource quota
I0412 19:09:22.569213 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0412 19:09:22.785645 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xtn7s"
I0412 19:09:22.973375 1 shared_informer.go:247] Caches are synced for garbage collector
I0412 19:09:22.975757 1 shared_informer.go:247] Caches are synced for garbage collector
I0412 19:09:22.975997 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0412 19:09:23.281602 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8gcm4"
I0412 19:09:23.296203 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-7chzb"
I0412 19:09:23.402715 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0412 19:09:23.412169 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-8gcm4"
*
* ==> kube-proxy [6b630bf2ec18] <==
* I0412 19:09:24.835806 1 node.go:163] Successfully retrieved node IP: 192.168.64.45
I0412 19:09:24.835894 1 server_others.go:138] "Detected node IP" address="192.168.64.45"
I0412 19:09:24.835912 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0412 19:09:24.873964 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0412 19:09:24.873995 1 server_others.go:206] "Using iptables Proxier"
I0412 19:09:24.874216 1 server.go:656] "Version info" version="v1.23.5"
I0412 19:09:24.874718 1 config.go:226] "Starting endpoint slice config controller"
I0412 19:09:24.874748 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0412 19:09:24.874793 1 config.go:317] "Starting service config controller"
I0412 19:09:24.874797 1 shared_informer.go:240] Waiting for caches to sync for service config
I0412 19:09:24.975775 1 shared_informer.go:247] Caches are synced for service config
I0412 19:09:24.975824 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-proxy [989d658524b1] <==
* I0412 19:10:29.673723 1 node.go:163] Successfully retrieved node IP: 192.168.64.45
I0412 19:10:29.677481 1 server_others.go:138] "Detected node IP" address="192.168.64.45"
I0412 19:10:29.677682 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0412 19:10:29.753984 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0412 19:10:29.754014 1 server_others.go:206] "Using iptables Proxier"
I0412 19:10:29.754255 1 server.go:656] "Version info" version="v1.23.5"
I0412 19:10:29.754770 1 config.go:226] "Starting endpoint slice config controller"
I0412 19:10:29.754797 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0412 19:10:29.754833 1 config.go:317] "Starting service config controller"
I0412 19:10:29.754855 1 shared_informer.go:240] Waiting for caches to sync for service config
I0412 19:10:29.856222 1 shared_informer.go:247] Caches are synced for service config
I0412 19:10:29.856269 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [2974b4132f68] <==
* I0412 19:10:26.687474 1 serving.go:348] Generated self-signed cert in-memory
W0412 19:10:29.573301 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0412 19:10:29.573382 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0412 19:10:29.573514 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0412 19:10:29.573578 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0412 19:10:29.630957 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.5"
I0412 19:10:29.632194 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I0412 19:10:29.632414 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0412 19:10:29.632472 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0412 19:10:29.632559 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0412 19:10:29.733080 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0412 19:10:36.230437 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0412 19:10:36.230529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
E0412 19:10:36.230930 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0412 19:10:36.231069 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0412 19:10:36.231105 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
E0412 19:10:36.231198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
E0412 19:10:36.231232 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
E0412 19:10:36.231245 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
E0412 19:10:36.231259 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
E0412 19:10:36.231313 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
E0412 19:10:36.231330 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
E0412 19:10:36.231346 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
E0412 19:10:36.231363 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
E0412 19:10:36.243915 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
*
* ==> kube-scheduler [9c2695cfc44f] <==
* W0412 19:09:07.430659 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0412 19:09:07.431765 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0412 19:09:08.266017 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0412 19:09:08.266098 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0412 19:09:08.302834 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0412 19:09:08.302894 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0412 19:09:08.327182 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0412 19:09:08.327260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0412 19:09:08.367739 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0412 19:09:08.367821 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0412 19:09:08.379077 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0412 19:09:08.379152 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0412 19:09:08.390103 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0412 19:09:08.390217 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0412 19:09:08.445167 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0412 19:09:08.445260 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0412 19:09:08.461846 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0412 19:09:08.461950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0412 19:09:08.487571 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0412 19:09:08.487697 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0412 19:09:10.139697 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I0412 19:09:10.609155 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0412 19:10:23.730142 1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
I0412 19:10:23.730161 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0412 19:10:23.730487 1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
*
* ==> kubelet <==
* -- Journal begins at Tue 2022-04-12 19:08:45 UTC, ends at Tue 2022-04-12 19:16:59 UTC. --
Apr 12 19:11:55 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:55.856533 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
Apr 12 19:11:56 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:56.865784 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
Apr 12 19:11:56 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:56.868409 6648 scope.go:110] "RemoveContainer" containerID="a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.811832 6648 topology_manager.go:200] "Topology Admit Handler"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.848131 6648 topology_manager.go:200] "Topology Admit Handler"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.863440 6648 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8jf\" (UniqueName: \"kubernetes.io/projected/ea142c3f-7538-490f-bca0-e26b351e193a-kube-api-access-ss8jf\") pod \"kubernetes-dashboard-8469778f77-gzkhw\" (UID: \"ea142c3f-7538-490f-bca0-e26b351e193a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-gzkhw"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.863608 6648 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ea142c3f-7538-490f-bca0-e26b351e193a-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-gzkhw\" (UID: \"ea142c3f-7538-490f-bca0-e26b351e193a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-gzkhw"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.879776 6648 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c45a15678f696becfaa07d9cf89bf91a38ae31266ebdbf76ec696e4014283393"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.964207 6648 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hsrt\" (UniqueName: \"kubernetes.io/projected/08ee722b-a5fd-4751-9220-a95ea0dfef04-kube-api-access-7hsrt\") pod \"dashboard-metrics-scraper-58549894f-5jtkn\" (UID: \"08ee722b-a5fd-4751-9220-a95ea0dfef04\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn"
Apr 12 19:11:57 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:57.964346 6648 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/08ee722b-a5fd-4751-9220-a95ea0dfef04-tmp-volume\") pod \"dashboard-metrics-scraper-58549894f-5jtkn\" (UID: \"08ee722b-a5fd-4751-9220-a95ea0dfef04\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn"
Apr 12 19:11:58 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:58.949778 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-gzkhw through plugin: invalid network status for"
Apr 12 19:11:58 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:58.950145 6648 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0756498d9160b778b69c8ee797f11c3b5f6d7a3ab60f6a7a05ac0df3144458ae"
Apr 12 19:11:58 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:58.994697 6648 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5dd5078cb97fa637b8cd94dcc386d7889b5ee7a295ac1deed34d6fd6745fe1f0"
Apr 12 19:11:58 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:58.996447 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn through plugin: invalid network status for"
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.080966 6648 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/262e1edc-896d-4ee2-a092-53fe2b06f7a5-test-volume\") pod \"262e1edc-896d-4ee2-a092-53fe2b06f7a5\" (UID: \"262e1edc-896d-4ee2-a092-53fe2b06f7a5\") "
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.081488 6648 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdz2\" (UniqueName: \"kubernetes.io/projected/262e1edc-896d-4ee2-a092-53fe2b06f7a5-kube-api-access-fcdz2\") pod \"262e1edc-896d-4ee2-a092-53fe2b06f7a5\" (UID: \"262e1edc-896d-4ee2-a092-53fe2b06f7a5\") "
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.081361 6648 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/262e1edc-896d-4ee2-a092-53fe2b06f7a5-test-volume" (OuterVolumeSpecName: "test-volume") pod "262e1edc-896d-4ee2-a092-53fe2b06f7a5" (UID: "262e1edc-896d-4ee2-a092-53fe2b06f7a5"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.093305 6648 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262e1edc-896d-4ee2-a092-53fe2b06f7a5-kube-api-access-fcdz2" (OuterVolumeSpecName: "kube-api-access-fcdz2") pod "262e1edc-896d-4ee2-a092-53fe2b06f7a5" (UID: "262e1edc-896d-4ee2-a092-53fe2b06f7a5"). InnerVolumeSpecName "kube-api-access-fcdz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.183150 6648 reconciler.go:300] "Volume detached for volume \"kube-api-access-fcdz2\" (UniqueName: \"kubernetes.io/projected/262e1edc-896d-4ee2-a092-53fe2b06f7a5-kube-api-access-fcdz2\") on node \"functional-20220412120837-7629\" DevicePath \"\""
Apr 12 19:11:59 functional-20220412120837-7629 kubelet[6648]: I0412 19:11:59.183228 6648 reconciler.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/262e1edc-896d-4ee2-a092-53fe2b06f7a5-test-volume\") on node \"functional-20220412120837-7629\" DevicePath \"\""
Apr 12 19:12:00 functional-20220412120837-7629 kubelet[6648]: I0412 19:12:00.003290 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn through plugin: invalid network status for"
Apr 12 19:12:00 functional-20220412120837-7629 kubelet[6648]: I0412 19:12:00.005502 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-gzkhw through plugin: invalid network status for"
Apr 12 19:12:05 functional-20220412120837-7629 kubelet[6648]: I0412 19:12:05.058726 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-8469778f77-gzkhw through plugin: invalid network status for"
Apr 12 19:12:07 functional-20220412120837-7629 kubelet[6648]: I0412 19:12:07.081186 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn through plugin: invalid network status for"
Apr 12 19:12:08 functional-20220412120837-7629 kubelet[6648]: I0412 19:12:08.116271 6648 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-5jtkn through plugin: invalid network status for"
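The repeated docker_sandbox messages above are typically transient: dockershim cannot read a pod's IP while the sandbox network is still being set up. As a cross-check, the IP the API server eventually records for one of these pods can be read directly; the sketch below does that with client-go (the pod name is taken from the log lines above, and the kubeconfig handling is an assumption, not part of the test).

// pod_ip_check.go — a sketch, not part of the minikube test suite.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig points at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kubernetes-dashboard").Get(
		context.Background(), "kubernetes-dashboard-8469778f77-gzkhw", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// An empty PodIP would mean the sandbox network was still coming up
	// when the kubelet logged "invalid network status".
	fmt.Println("PodIP:", pod.Status.PodIP)
}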
*
* ==> kubernetes-dashboard [f91c0ecb52c5] <==
* 2022/04/12 19:12:04 Starting overwatch
2022/04/12 19:12:04 Using namespace: kubernetes-dashboard
2022/04/12 19:12:04 Using in-cluster config to connect to apiserver
2022/04/12 19:12:04 Using secret token for csrf signing
2022/04/12 19:12:04 Initializing csrf token from kubernetes-dashboard-csrf secret
2022/04/12 19:12:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2022/04/12 19:12:04 Successful initial request to the apiserver, version: v1.23.5
2022/04/12 19:12:04 Generating JWE encryption key
2022/04/12 19:12:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2022/04/12 19:12:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2022/04/12 19:12:05 Initializing JWE encryption key from synchronized object
2022/04/12 19:12:05 Creating in-cluster Sidecar client
2022/04/12 19:12:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2022/04/12 19:12:05 Serving insecurely on HTTP port: 9090
2022/04/12 19:12:35 Successful request to sidecar
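The dashboard container itself starts cleanly: CSRF and JWE keys are initialized, the apiserver answers at v1.23.5, and the process serves on HTTP port 9090, so the dashboard process is an unlikely culprit for the failure recorded at the end of this log. A minimal sketch of the kind of readiness poll a caller might run before reporting the endpoint, assuming the port has been made reachable on the host (the URL and timeouts are placeholders; this is not minikube's actual logic):

// poll_dashboard.go — an illustrative sketch under the assumptions above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForURL(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 500 {
				return nil // any non-5xx answer counts as "up"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no response from %s within %s", url, timeout)
}

func main() {
	// 9090 matches the "Serving insecurely on HTTP port: 9090" line above;
	// reaching it from the host would still require a proxy or port-forward.
	if err := waitForURL("http://127.0.0.1:9090/", time.Minute); err != nil {
		fmt.Println(err)
	}
}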
*
* ==> storage-provisioner [03254aef4e87] <==
* I0412 19:09:26.136943 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0412 19:09:26.144551 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0412 19:09:26.144623 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0412 19:09:26.149853 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0412 19:09:26.149999 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220412120837-7629_416f816e-7077-4f73-8b6e-19778db93e8d!
I0412 19:09:26.150222 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"367b35c6-3499-49a1-aebb-c8c40c92f44d", APIVersion:"v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220412120837-7629_416f816e-7077-4f73-8b6e-19778db93e8d became leader
I0412 19:09:26.250239 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220412120837-7629_416f816e-7077-4f73-8b6e-19778db93e8d!
*
* ==> storage-provisioner [fd88ccc3cd3a] <==
* I0412 19:10:26.926027 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0412 19:10:29.706714 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0412 19:10:29.706761 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0412 19:10:33.175700 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
I0412 19:10:47.126326 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0412 19:10:47.126644 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220412120837-7629_8f3f2770-ca7c-4e00-bb2f-a51b8a4b7f98!
I0412 19:10:47.127802 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"367b35c6-3499-49a1-aebb-c8c40c92f44d", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220412120837-7629_8f3f2770-ca7c-4e00-bb2f-a51b8a4b7f98 became leader
I0412 19:10:47.227250 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220412120837-7629_8f3f2770-ca7c-4e00-bb2f-a51b8a4b7f98!
I0412 19:11:29.697666 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0412 19:11:29.697846 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard d21c3109-8af0-4f6b-89d7-922eb2169ddc 452 0 2022-04-12 19:09:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-04-12 19:09:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-067523c6-1def-4de3-8329-64cfee2c8ebf &PersistentVolumeClaim{ObjectMeta:{myclaim default 067523c6-1def-4de3-8329-64cfee2c8ebf 668 0 2022-04-12 19:11:29 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2022-04-12 19:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2022-04-12 19:11:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0412 19:11:29.698403 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-067523c6-1def-4de3-8329-64cfee2c8ebf" provisioned
I0412 19:11:29.698534 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0412 19:11:29.698559 1 volume_store.go:212] Trying to save persistentvolume "pvc-067523c6-1def-4de3-8329-64cfee2c8ebf"
I0412 19:11:29.699603 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"067523c6-1def-4de3-8329-64cfee2c8ebf", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0412 19:11:29.729324 1 volume_store.go:219] persistentvolume "pvc-067523c6-1def-4de3-8329-64cfee2c8ebf" saved
I0412 19:11:29.729650 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"067523c6-1def-4de3-8329-64cfee2c8ebf", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-067523c6-1def-4de3-8329-64cfee2c8ebf
-- /stdout --
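The provisioner log above records a complete round trip for "default/myclaim": a 500Mi ReadWriteOnce claim against the default "standard" StorageClass, provisioned to /tmp/hostpath-provisioner/default/myclaim. For reference, a sketch that reconstructs the same claim with client-go, assuming a reachable cluster and the default kubeconfig (illustrative only, not part of the test suite):

// create_pvc.go — a sketch rebuilding the claim shown in the provisioner log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig points at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	sc := "standard" // the default StorageClass named in the log
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "myclaim", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("500Mi"),
				},
			},
		},
	}
	created, err := clientset.CoreV1().PersistentVolumeClaims("default").
		Create(context.Background(), pvc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PVC:", created.Name)
}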
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220412120837-7629 -n functional-20220412120837-7629
helpers_test.go:261: (dbg) Run: kubectl --context functional-20220412120837-7629 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-20220412120837-7629 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220412120837-7629 describe pod busybox-mount:
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Node: functional-20220412120837-7629/192.168.64.45
Start Time: Tue, 12 Apr 2022 12:11:54 -0700
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 172.17.0.8
IPs:
IP: 172.17.0.8
Containers:
mount-munger:
Container ID: docker://a4fb6b97ea91481193e0d0658eb5af7580c3f1891b694b503e177fe20dca98fa
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 12 Apr 2022 12:11:56 -0700
Finished: Tue, 12 Apr 2022 12:11:56 -0700
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fcdz2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-fcdz2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m5s default-scheduler Successfully assigned default/busybox-mount to functional-20220412120837-7629
Normal Pulling 5m5s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 5m3s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.193489441s
Normal Created 5m3s kubelet Created container mount-munger
Normal Started 5m3s kubelet Started container mount-munger
-- /stdout --
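One detail worth flagging in the describe output above: the Args string has no command separator before "date", so under POSIX sh the final echo receives the words "test date", both redirections attach to that single command, and created-by-pod-removed-by-test is created empty while the literal text "test date" is appended to pod-dates (rather than the output of the date command). A small sketch reproducing that parsing, with /tmp paths standing in for the /mount-9p files:

// redirect_demo.go — a sketch of how sh parses the tail of the Args line.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same shape as the pod's command: no ';' before "date".
	cmd := "echo test > /tmp/created-by-pod-removed-by-test date >> /tmp/pod-dates"
	if err := exec.Command("/bin/sh", "-c", cmd).Run(); err != nil {
		panic(err)
	}
	a, _ := os.ReadFile("/tmp/created-by-pod-removed-by-test")
	b, _ := os.ReadFile("/tmp/pod-dates")
	// The first file is opened and truncated, then stdout is re-pointed at
	// the second file, so "test date" lands in pod-dates and the first file
	// stays empty.
	fmt.Printf("created-by-pod-removed-by-test: %q\n", a)
	fmt.Printf("pod-dates: %q\n", b)
}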
helpers_test.go:283: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (303.14s)