=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-645838 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-645838 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-645838 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-645838 --alsologtostderr -v=1] stderr:
I0706 18:07:45.233982 23079 out.go:296] Setting OutFile to fd 1 ...
I0706 18:07:45.234160 23079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 18:07:45.234172 23079 out.go:309] Setting ErrFile to fd 2...
I0706 18:07:45.234179 23079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 18:07:45.234348 23079 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15452-9102/.minikube/bin
I0706 18:07:45.234663 23079 mustload.go:65] Loading cluster: functional-645838
I0706 18:07:45.235117 23079 config.go:182] Loaded profile config "functional-645838": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0706 18:07:45.235633 23079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.235696 23079 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.249799 23079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
I0706 18:07:45.250228 23079 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.250788 23079 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.250811 23079 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.251223 23079 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.251419 23079 main.go:141] libmachine: (functional-645838) Calling .GetState
I0706 18:07:45.252883 23079 host.go:66] Checking if "functional-645838" exists ...
I0706 18:07:45.253414 23079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.253462 23079 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.267768 23079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
I0706 18:07:45.268152 23079 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.268666 23079 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.268709 23079 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.269073 23079 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.269263 23079 main.go:141] libmachine: (functional-645838) Calling .DriverName
I0706 18:07:45.269433 23079 api_server.go:166] Checking apiserver status ...
I0706 18:07:45.269486 23079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0706 18:07:45.269509 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHHostname
I0706 18:07:45.272508 23079 main.go:141] libmachine: (functional-645838) DBG | domain functional-645838 has defined MAC address 52:54:00:24:a6:07 in network mk-functional-645838
I0706 18:07:45.272973 23079 main.go:141] libmachine: (functional-645838) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a6:07", ip: ""} in network mk-functional-645838: {Iface:virbr1 ExpiryTime:2023-07-06 19:05:25 +0000 UTC Type:0 Mac:52:54:00:24:a6:07 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-645838 Clientid:01:52:54:00:24:a6:07}
I0706 18:07:45.273012 23079 main.go:141] libmachine: (functional-645838) DBG | domain functional-645838 has defined IP address 192.168.39.124 and MAC address 52:54:00:24:a6:07 in network mk-functional-645838
I0706 18:07:45.273160 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHPort
I0706 18:07:45.273360 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHKeyPath
I0706 18:07:45.273674 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHUsername
I0706 18:07:45.273845 23079 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15452-9102/.minikube/machines/functional-645838/id_rsa Username:docker}
I0706 18:07:45.400579 23079 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3695/cgroup
I0706 18:07:45.429441 23079 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod096a2ed633aabf0d8c5bc41c15976507/4b58563dd73b3d3ed635dec05162e0fcfad16c56421b66ff020e69b7bbd0d914"
I0706 18:07:45.429505 23079 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod096a2ed633aabf0d8c5bc41c15976507/4b58563dd73b3d3ed635dec05162e0fcfad16c56421b66ff020e69b7bbd0d914/freezer.state
I0706 18:07:45.461435 23079 api_server.go:204] freezer state: "THAWED"
I0706 18:07:45.461455 23079 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8441/healthz ...
I0706 18:07:45.470520 23079 api_server.go:279] https://192.168.39.124:8441/healthz returned 200:
ok
W0706 18:07:45.470562 23079 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0706 18:07:45.470695 23079 config.go:182] Loaded profile config "functional-645838": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0706 18:07:45.470706 23079 addons.go:66] Setting dashboard=true in profile "functional-645838"
I0706 18:07:45.470715 23079 addons.go:228] Setting addon dashboard=true in "functional-645838"
I0706 18:07:45.470796 23079 host.go:66] Checking if "functional-645838" exists ...
I0706 18:07:45.471064 23079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.471094 23079 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.486757 23079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
I0706 18:07:45.487142 23079 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.487616 23079 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.487639 23079 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.487943 23079 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.488519 23079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.488567 23079 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.506960 23079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36999
I0706 18:07:45.507251 23079 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.507656 23079 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.507668 23079 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.507944 23079 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.508048 23079 main.go:141] libmachine: (functional-645838) Calling .GetState
I0706 18:07:45.509643 23079 main.go:141] libmachine: (functional-645838) Calling .DriverName
I0706 18:07:45.512125 23079 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0706 18:07:45.513780 23079 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0706 18:07:45.515318 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0706 18:07:45.515335 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0706 18:07:45.515351 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHHostname
I0706 18:07:45.518785 23079 main.go:141] libmachine: (functional-645838) DBG | domain functional-645838 has defined MAC address 52:54:00:24:a6:07 in network mk-functional-645838
I0706 18:07:45.519196 23079 main.go:141] libmachine: (functional-645838) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a6:07", ip: ""} in network mk-functional-645838: {Iface:virbr1 ExpiryTime:2023-07-06 19:05:25 +0000 UTC Type:0 Mac:52:54:00:24:a6:07 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:functional-645838 Clientid:01:52:54:00:24:a6:07}
I0706 18:07:45.519444 23079 main.go:141] libmachine: (functional-645838) DBG | domain functional-645838 has defined IP address 192.168.39.124 and MAC address 52:54:00:24:a6:07 in network mk-functional-645838
I0706 18:07:45.519466 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHPort
I0706 18:07:45.519634 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHKeyPath
I0706 18:07:45.519785 23079 main.go:141] libmachine: (functional-645838) Calling .GetSSHUsername
I0706 18:07:45.519912 23079 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15452-9102/.minikube/machines/functional-645838/id_rsa Username:docker}
I0706 18:07:45.718745 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0706 18:07:45.718771 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0706 18:07:45.775444 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0706 18:07:45.775467 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0706 18:07:45.805837 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0706 18:07:45.805858 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0706 18:07:45.824947 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0706 18:07:45.824975 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0706 18:07:45.847948 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
I0706 18:07:45.847973 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0706 18:07:45.871919 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0706 18:07:45.871942 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0706 18:07:45.891894 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0706 18:07:45.891918 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0706 18:07:45.919279 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0706 18:07:45.919303 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0706 18:07:45.957800 23079 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0706 18:07:45.957827 23079 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0706 18:07:45.986433 23079 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0706 18:07:47.532832 23079 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.546332707s)
I0706 18:07:47.532892 23079 main.go:141] libmachine: Making call to close driver server
I0706 18:07:47.532905 23079 main.go:141] libmachine: (functional-645838) Calling .Close
I0706 18:07:47.533221 23079 main.go:141] libmachine: Successfully made call to close driver server
I0706 18:07:47.533241 23079 main.go:141] libmachine: Making call to close connection to plugin binary
I0706 18:07:47.533252 23079 main.go:141] libmachine: Making call to close driver server
I0706 18:07:47.533262 23079 main.go:141] libmachine: (functional-645838) Calling .Close
I0706 18:07:47.533499 23079 main.go:141] libmachine: Successfully made call to close driver server
I0706 18:07:47.533517 23079 main.go:141] libmachine: Making call to close connection to plugin binary
I0706 18:07:47.535236 23079 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-645838 addons enable metrics-server
I0706 18:07:47.536672 23079 addons.go:191] Writing out "functional-645838" config to set dashboard=true...
W0706 18:07:47.536986 23079 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0706 18:07:47.539757 23079 kapi.go:59] client config for functional-645838: &rest.Config{Host:"https://192.168.39.124:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15452-9102/.minikube/profiles/functional-645838/client.crt", KeyFile:"/home/jenkins/minikube-integration/15452-9102/.minikube/profiles/functional-645838/client.key", CAFile:"/home/jenkins/minikube-integration/15452-9102/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0706 18:07:47.556947 23079 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard c1c9cdb5-e5e6-4c05-848b-d9cbc889bcba 646 0 2023-07-06 18:07:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2023-07-06 18:07:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.81.43,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.81.43],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0706 18:07:47.557070 23079 out.go:239] * Launching proxy ...
* Launching proxy ...
I0706 18:07:47.557125 23079 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-645838 proxy --port 36195]
I0706 18:07:47.557405 23079 dashboard.go:157] Waiting for kubectl to output host:port ...
I0706 18:07:47.600883 23079 out.go:177]
W0706 18:07:47.602297 23079 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0706 18:07:47.602317 23079 out.go:239] *
*
W0706 18:07:47.604986 23079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0706 18:07:47.606385 23079 out.go:177]
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-645838 -n functional-645838
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-645838 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-645838 logs -n 25: (1.856301003s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
*
* ==> Audit <==
* |-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-645838 ssh | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:06 UTC | 06 Jul 23 18:06 UTC |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | delete | minikube | jenkins | v1.30.1 | 06 Jul 23 18:06 UTC | 06 Jul 23 18:06 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | delete | minikube | jenkins | v1.30.1 | 06 Jul 23 18:06 UTC | 06 Jul 23 18:06 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-645838 kubectl -- | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:06 UTC | 06 Jul 23 18:06 UTC |
| | --context functional-645838 | | | | | |
| | get pods | | | | | |
| start | -p functional-645838 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:06 UTC | 06 Jul 23 18:07 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
| service | invalid-svc -p | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | functional-645838 | | | | | |
| config | functional-645838 config unset | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | cpus | | | | | |
| cp | functional-645838 cp | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-645838 config get | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | cpus | | | | | |
| config | functional-645838 config set | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | cpus 2 | | | | | |
| config | functional-645838 config get | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | cpus | | | | | |
| config | functional-645838 config unset | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | cpus | | | | | |
| config | functional-645838 config get | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | cpus | | | | | |
| ssh | functional-645838 ssh -n | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | functional-645838 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| mount | -p functional-645838 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | /tmp/TestFunctionalparallelMountCmdany-port1627627716/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-645838 ssh findmnt | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| cp | functional-645838 cp | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | functional-645838:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd3331492407/001/cp-test.txt | | | | | |
| ssh | functional-645838 ssh -n | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | functional-645838 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-645838 ssh findmnt | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| start | -p functional-645838 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-645838 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | functional-645838 ssh -- ls | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | -la /mount-9p | | | | | |
| start | -p functional-645838 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| dashboard | --url --port 36195 | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | |
| | -p functional-645838 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | functional-645838 ssh cat | functional-645838 | jenkins | v1.30.1 | 06 Jul 23 18:07 UTC | 06 Jul 23 18:07 UTC |
| | /mount-9p/test-1688666864074244477 | | | | | |
|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/07/06 18:07:45
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.20.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0706 18:07:45.094646 23028 out.go:296] Setting OutFile to fd 1 ...
I0706 18:07:45.094762 23028 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 18:07:45.094770 23028 out.go:309] Setting ErrFile to fd 2...
I0706 18:07:45.094774 23028 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 18:07:45.094879 23028 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15452-9102/.minikube/bin
I0706 18:07:45.095351 23028 out.go:303] Setting JSON to false
I0706 18:07:45.096189 23028 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3012,"bootTime":1688663853,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0706 18:07:45.096245 23028 start.go:137] virtualization: kvm guest
I0706 18:07:45.098402 23028 out.go:177] * [functional-645838] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
I0706 18:07:45.100135 23028 out.go:177] - MINIKUBE_LOCATION=15452
I0706 18:07:45.100098 23028 notify.go:220] Checking for updates...
I0706 18:07:45.101492 23028 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0706 18:07:45.102799 23028 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15452-9102/kubeconfig
I0706 18:07:45.104053 23028 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15452-9102/.minikube
I0706 18:07:45.105384 23028 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0706 18:07:45.106831 23028 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0706 18:07:45.109979 23028 config.go:182] Loaded profile config "functional-645838": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0706 18:07:45.111022 23028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.111075 23028 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.125433 23028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
I0706 18:07:45.125823 23028 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.126278 23028 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.126300 23028 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.126592 23028 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.126724 23028 main.go:141] libmachine: (functional-645838) Calling .DriverName
I0706 18:07:45.126954 23028 driver.go:373] Setting default libvirt URI to qemu:///system
I0706 18:07:45.127211 23028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0706 18:07:45.127249 23028 main.go:141] libmachine: Launching plugin server for driver kvm2
I0706 18:07:45.142869 23028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
I0706 18:07:45.143229 23028 main.go:141] libmachine: () Calling .GetVersion
I0706 18:07:45.143696 23028 main.go:141] libmachine: Using API Version 1
I0706 18:07:45.143741 23028 main.go:141] libmachine: () Calling .SetConfigRaw
I0706 18:07:45.144090 23028 main.go:141] libmachine: () Calling .GetMachineName
I0706 18:07:45.144267 23028 main.go:141] libmachine: (functional-645838) Calling .DriverName
I0706 18:07:45.180397 23028 out.go:177] * Using the kvm2 driver based on existing profile
I0706 18:07:45.181874 23028 start.go:297] selected driver: kvm2
I0706 18:07:45.181888 23028 start.go:944] validating driver "kvm2" against &{Name:functional-645838 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-645838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0706 18:07:45.181990 23028 start.go:955] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0706 18:07:45.182961 23028 cni.go:84] Creating CNI manager for ""
I0706 18:07:45.182984 23028 cni.go:152] "kvm2" driver + "containerd" runtime found, recommending bridge
I0706 18:07:45.182997 23028 start_flags.go:319] config:
{Name:functional-645838 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686675164-15452@sha256:1b5dd777e073cc98bda2dc463cdc550cd7c5b3dcdbff2b89d285943191470e34 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-645838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0706 18:07:45.184704 23028 out.go:177] * dry-run validation complete!
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4b58563dd73b3 08a0c939e61b7 31 seconds ago Running kube-apiserver 0 b72c0cd81beea
27e0aad2fc004 7cffc01dba0e1 31 seconds ago Running kube-controller-manager 1 60839d25ee3e2
9d8a421e4946a 86b6af7dd652c 43 seconds ago Running etcd 1 a943987c66661
2996ff4c6392c 41697ceeb70b3 43 seconds ago Running kube-scheduler 1 ff4bbfd1b8cfb
7283a810d144e 5780543258cf0 44 seconds ago Running kube-proxy 1 06ac78c021e5d
45fd8c73c9a84 ead0a4a53df89 44 seconds ago Running coredns 1 16a35d8b3972d
847ae86609240 6e38f40d628db 49 seconds ago Running storage-provisioner 1 b922a33f9e365
76396e07223d2 6e38f40d628db About a minute ago Exited storage-provisioner 0 b922a33f9e365
5c03071b8750d ead0a4a53df89 About a minute ago Exited coredns 0 16a35d8b3972d
accc689f1f1eb 5780543258cf0 About a minute ago Exited kube-proxy 0 06ac78c021e5d
62a3cd769522c 86b6af7dd652c 2 minutes ago Exited etcd 0 a943987c66661
e9b928f0a7f63 41697ceeb70b3 2 minutes ago Exited kube-scheduler 0 ff4bbfd1b8cfb
b83bdd51bb137 7cffc01dba0e1 2 minutes ago Exited kube-controller-manager 0 60839d25ee3e2
*
* ==> containerd <==
* -- Journal begins at Thu 2023-07-06 18:05:22 UTC, ends at Thu 2023-07-06 18:07:48 UTC. --
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.205130653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-node-775766b4cc-ct7vz,Uid:af07169a-9956-46e9-9846-5a5e03fc029e,Namespace:default,Attempt:0,}"
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.350025327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.350125408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.350140986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.350149693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.835263943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-node-775766b4cc-ct7vz,Uid:af07169a-9956-46e9-9846-5a5e03fc029e,Namespace:default,Attempt:0,} returns sandbox id \"0976e80bc3d0d30ee370e556c93807d94884b528dc4d28866775aed0ac42cfb4\""
Jul 06 18:07:44 functional-645838 containerd[2811]: time="2023-07-06T18:07:44.837969651Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.8\""
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.036269964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:ad13f420-8700-4873-8ae3-bfd36052b2c8,Namespace:default,Attempt:0,}"
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.249909462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.250718636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.251070350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.251347288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:46 functional-645838 containerd[2811]: time="2023-07-06T18:07:46.826164265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:ad13f420-8700-4873-8ae3-bfd36052b2c8,Namespace:default,Attempt:0,} returns sandbox id \"17252280b9f8dfa57f1493ad7812ec65242294560b3704e9910b698482729c43\""
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.640528511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-5c5cfc8747-rh4d4,Uid:3bc53922-23dd-4c2d-80f5-fce9078159a3,Namespace:kubernetes-dashboard,Attempt:0,}"
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.687294226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5dd9cbfd69-d2lk7,Uid:10713f38-e41e-42ab-bc89-d3a12ccfd4ec,Namespace:kubernetes-dashboard,Attempt:0,}"
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.803861286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.804198952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.804445463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 18:07:47 functional-645838 containerd[2811]: time="2023-07-06T18:07:47.804683034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.010697140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.011544677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.011756609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.012264262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.540373565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-5c5cfc8747-rh4d4,Uid:3bc53922-23dd-4c2d-80f5-fce9078159a3,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"d446bf3f0aabbd0073562148449c5083c923cacefecc7eaf8f69c2260ed93938\""
Jul 06 18:07:48 functional-645838 containerd[2811]: time="2023-07-06T18:07:48.646173053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5dd9cbfd69-d2lk7,Uid:10713f38-e41e-42ab-bc89-d3a12ccfd4ec,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"c9983b6a4e0ba3ad2f11959c94fb8882e4cd0cf601b2c30612d022d7a70098c3\""
*
* ==> coredns [45fd8c73c9a848dce84df515375f886a623499dfe77f474cac583b04eaa9597f] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:43301 - 23855 "HINFO IN 3209423863026449032.1361236945974476689. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00919158s
*
* ==> coredns [5c03071b8750d951c7735ef7933858af08ae21e7623b9dac0738d34d9919d0f9] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
[INFO] Reloading complete
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: functional-645838
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-645838
kubernetes.io/os=linux
minikube.k8s.io/commit=b6e1a3abc91e215b081da44b95c5d4a34c954e9b
minikube.k8s.io/name=functional-645838
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_07_06T18_05_54_0700
minikube.k8s.io/version=v1.30.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 06 Jul 2023 18:05:51 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-645838
AcquireTime: <unset>
RenewTime: Thu, 06 Jul 2023 18:07:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 06 Jul 2023 18:07:20 +0000 Thu, 06 Jul 2023 18:05:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 06 Jul 2023 18:07:20 +0000 Thu, 06 Jul 2023 18:05:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 06 Jul 2023 18:07:20 +0000 Thu, 06 Jul 2023 18:05:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 06 Jul 2023 18:07:20 +0000 Thu, 06 Jul 2023 18:05:54 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.124
Hostname: functional-645838
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
System Info:
Machine ID: c88dc2d671f94231bf52ec4e5fde42e5
System UUID: c88dc2d6-71f9-4231-bf52-ec4e5fde42e5
Boot ID: f2f89435-d34c-40e0-9c97-fe2edb3ebc9d
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.2
Kubelet Version: v1.27.3
Kube-Proxy Version: v1.27.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-mount 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4s
default hello-node-775766b4cc-ct7vz 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 6s
kube-system coredns-5d78c9869d-9gcrb 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 102s
kube-system etcd-functional-645838 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 117s
kube-system kube-apiserver-functional-645838 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 29s
kube-system kube-controller-manager-functional-645838 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 116s
kube-system kube-proxy-6dkl9 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 102s
kube-system kube-scheduler-functional-645838 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 115s
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 99s
kubernetes-dashboard dashboard-metrics-scraper-5dd9cbfd69-d2lk7 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2s
kubernetes-dashboard kubernetes-dashboard-5c5cfc8747-rh4d4 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (4%!)(MISSING) 170Mi (4%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 99s kube-proxy
Normal Starting 41s kube-proxy
Normal NodeHasSufficientMemory 115s kubelet Node functional-645838 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 115s kubelet Node functional-645838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 115s kubelet Node functional-645838 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 115s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 115s kubelet Node functional-645838 status is now: NodeReady
Normal Starting 115s kubelet Starting kubelet.
Normal RegisteredNode 103s node-controller Node functional-645838 event: Registered Node functional-645838 in Controller
Normal Starting 33s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 33s (x8 over 33s) kubelet Node functional-645838 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 33s (x8 over 33s) kubelet Node functional-645838 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 33s (x7 over 33s) kubelet Node functional-645838 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 33s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 16s node-controller Node functional-645838 event: Registered Node functional-645838 in Controller
*
* ==> dmesg <==
* [ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.030679] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.474383] systemd-fstab-generator[561]: Ignoring "noauto" for root device
[ +0.115742] systemd-fstab-generator[572]: Ignoring "noauto" for root device
[ +0.136773] systemd-fstab-generator[585]: Ignoring "noauto" for root device
[ +0.095829] systemd-fstab-generator[596]: Ignoring "noauto" for root device
[ +0.218763] systemd-fstab-generator[623]: Ignoring "noauto" for root device
[ +5.404235] systemd-fstab-generator[683]: Ignoring "noauto" for root device
[ +4.891564] systemd-fstab-generator[856]: Ignoring "noauto" for root device
[ +8.731297] systemd-fstab-generator[1220]: Ignoring "noauto" for root device
[Jul 6 18:06] kauditd_printk_skb: 26 callbacks suppressed
[ +21.815122] systemd-fstab-generator[2093]: Ignoring "noauto" for root device
[ +0.148075] systemd-fstab-generator[2104]: Ignoring "noauto" for root device
[ +0.151647] systemd-fstab-generator[2117]: Ignoring "noauto" for root device
[ +0.149612] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
[ +0.272131] systemd-fstab-generator[2154]: Ignoring "noauto" for root device
[ +14.808465] systemd-fstab-generator[2743]: Ignoring "noauto" for root device
[ +0.128077] systemd-fstab-generator[2754]: Ignoring "noauto" for root device
[ +0.150993] systemd-fstab-generator[2767]: Ignoring "noauto" for root device
[ +0.135920] systemd-fstab-generator[2778]: Ignoring "noauto" for root device
[ +0.244179] systemd-fstab-generator[2804]: Ignoring "noauto" for root device
[Jul 6 18:07] systemd-fstab-generator[3554]: Ignoring "noauto" for root device
[ +28.708076] kauditd_printk_skb: 16 callbacks suppressed
*
* ==> etcd [62a3cd769522c1227e51bb232a94939e41d7819a08fe76328a55f310623784d5] <==
* {"level":"info","ts":"2023-07-06T18:05:48.833Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","added-peer-id":"d7d437db3895ee2c","added-peer-peer-urls":["https://192.168.39.124:2380"]}
{"level":"info","ts":"2023-07-06T18:05:49.511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c is starting a new election at term 1"}
{"level":"info","ts":"2023-07-06T18:05:49.511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became pre-candidate at term 1"}
{"level":"info","ts":"2023-07-06T18:05:49.511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgPreVoteResp from d7d437db3895ee2c at term 1"}
{"level":"info","ts":"2023-07-06T18:05:49.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became candidate at term 2"}
{"level":"info","ts":"2023-07-06T18:05:49.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgVoteResp from d7d437db3895ee2c at term 2"}
{"level":"info","ts":"2023-07-06T18:05:49.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became leader at term 2"}
{"level":"info","ts":"2023-07-06T18:05:49.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7d437db3895ee2c elected leader d7d437db3895ee2c at term 2"}
{"level":"info","ts":"2023-07-06T18:05:49.514Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7d437db3895ee2c","local-member-attributes":"{Name:functional-645838 ClientURLs:[https://192.168.39.124:2379]}","request-path":"/0/members/d7d437db3895ee2c/attributes","cluster-id":"e1e7008e9cae601b","publish-timeout":"7s"}
{"level":"info","ts":"2023-07-06T18:05:49.514Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-06T18:05:49.515Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-07-06T18:05:49.517Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:05:49.518Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-06T18:05:49.526Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:05:49.528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:05:49.530Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:05:49.532Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.124:2379"}
{"level":"info","ts":"2023-07-06T18:05:49.532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-07-06T18:05:49.532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-07-06T18:07:04.656Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-07-06T18:07:04.657Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-645838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
{"level":"info","ts":"2023-07-06T18:07:04.677Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7d437db3895ee2c","current-leader-member-id":"d7d437db3895ee2c"}
{"level":"info","ts":"2023-07-06T18:07:04.681Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.39.124:2380"}
{"level":"info","ts":"2023-07-06T18:07:04.681Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.39.124:2380"}
{"level":"info","ts":"2023-07-06T18:07:04.681Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-645838","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
*
* ==> etcd [9d8a421e4946a7a50dd7232f28c90f8432ecbf6ddf808559f4e2363d28057107] <==
* {"level":"info","ts":"2023-07-06T18:07:06.302Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-07-06T18:07:06.302Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-07-06T18:07:06.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c switched to configuration voters=(15552116827903880748)"}
{"level":"info","ts":"2023-07-06T18:07:06.303Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","added-peer-id":"d7d437db3895ee2c","added-peer-peer-urls":["https://192.168.39.124:2380"]}
{"level":"info","ts":"2023-07-06T18:07:06.303Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:07:06.303Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-06T18:07:06.308Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-07-06T18:07:06.308Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d7d437db3895ee2c","initial-advertise-peer-urls":["https://192.168.39.124:2380"],"listen-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.124:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-07-06T18:07:06.308Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-07-06T18:07:06.308Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.124:2380"}
{"level":"info","ts":"2023-07-06T18:07:06.308Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.124:2380"}
{"level":"info","ts":"2023-07-06T18:07:07.389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c is starting a new election at term 2"}
{"level":"info","ts":"2023-07-06T18:07:07.389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became pre-candidate at term 2"}
{"level":"info","ts":"2023-07-06T18:07:07.389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgPreVoteResp from d7d437db3895ee2c at term 2"}
{"level":"info","ts":"2023-07-06T18:07:07.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became candidate at term 3"}
{"level":"info","ts":"2023-07-06T18:07:07.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgVoteResp from d7d437db3895ee2c at term 3"}
{"level":"info","ts":"2023-07-06T18:07:07.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became leader at term 3"}
{"level":"info","ts":"2023-07-06T18:07:07.390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7d437db3895ee2c elected leader d7d437db3895ee2c at term 3"}
{"level":"info","ts":"2023-07-06T18:07:07.397Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7d437db3895ee2c","local-member-attributes":"{Name:functional-645838 ClientURLs:[https://192.168.39.124:2379]}","request-path":"/0/members/d7d437db3895ee2c/attributes","cluster-id":"e1e7008e9cae601b","publish-timeout":"7s"}
{"level":"info","ts":"2023-07-06T18:07:07.397Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-06T18:07:07.398Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.124:2379"}
{"level":"info","ts":"2023-07-06T18:07:07.397Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-06T18:07:07.399Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-07-06T18:07:07.400Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-07-06T18:07:07.400Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> kernel <==
* 18:07:49 up 2 min, 0 users, load average: 1.75, 0.72, 0.27
Linux functional-645838 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4b58563dd73b3d3ed635dec05162e0fcfad16c56421b66ff020e69b7bbd0d914] <==
* I0706 18:07:20.372517 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0706 18:07:20.376933 1 aggregator.go:152] initial CRD sync complete...
I0706 18:07:20.376969 1 autoregister_controller.go:141] Starting autoregister controller
I0706 18:07:20.376976 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0706 18:07:20.376982 1 cache.go:39] Caches are synced for autoregister controller
I0706 18:07:20.378294 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0706 18:07:20.378516 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0706 18:07:20.378717 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0706 18:07:20.378305 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0706 18:07:20.385055 1 shared_informer.go:318] Caches are synced for configmaps
I0706 18:07:20.908110 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0706 18:07:20.989369 1 controller.go:624] quota admission added evaluator for: endpoints
I0706 18:07:21.261184 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0706 18:07:21.977561 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0706 18:07:21.986684 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0706 18:07:22.031563 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0706 18:07:22.065000 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0706 18:07:22.077339 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0706 18:07:33.543915 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0706 18:07:39.532629 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.102.181.151]
I0706 18:07:43.841517 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0706 18:07:43.976748 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.98.170.147]
I0706 18:07:47.048190 1 controller.go:624] quota admission added evaluator for: namespaces
I0706 18:07:47.488331 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.99.81.43]
I0706 18:07:47.526455 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.22.130]
*
* ==> kube-controller-manager [27e0aad2fc004b71ca6e89d0152ea66146358b47ad4e0930a24dfbc4dfdc92b4] <==
* I0706 18:07:33.911942 1 shared_informer.go:318] Caches are synced for garbage collector
I0706 18:07:33.940585 1 shared_informer.go:318] Caches are synced for garbage collector
I0706 18:07:33.940648 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0706 18:07:43.846372 1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-775766b4cc to 1"
I0706 18:07:43.865562 1 event.go:307] "Event occurred" object="default/hello-node-775766b4cc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-775766b4cc-ct7vz"
I0706 18:07:47.130618 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
I0706 18:07:47.145718 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0706 18:07:47.174321 1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.174746 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
E0706 18:07:47.185886 1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.186078 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0706 18:07:47.186100 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0706 18:07:47.205691 1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0706 18:07:47.222537 1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0706 18:07:47.225219 1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.225478 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0706 18:07:47.225491 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0706 18:07:47.236573 1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.236649 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0706 18:07:47.245557 1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.245632 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0706 18:07:47.261434 1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0706 18:07:47.261509 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0706 18:07:47.309909 1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-rh4d4"
I0706 18:07:47.325976 1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-d2lk7"
*
* ==> kube-controller-manager [b83bdd51bb1375649411c364c57312700e9b892757534eeab4806a12b6fb0301] <==
* I0706 18:06:06.921723 1 shared_informer.go:318] Caches are synced for deployment
I0706 18:06:06.923197 1 shared_informer.go:318] Caches are synced for crt configmap
I0706 18:06:06.924416 1 shared_informer.go:318] Caches are synced for HPA
I0706 18:06:06.925597 1 shared_informer.go:318] Caches are synced for TTL
I0706 18:06:06.927710 1 shared_informer.go:318] Caches are synced for namespace
I0706 18:06:06.932670 1 shared_informer.go:318] Caches are synced for stateful set
I0706 18:06:06.935149 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
I0706 18:06:06.936376 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
I0706 18:06:06.938003 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0706 18:06:06.940261 1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
I0706 18:06:07.021025 1 shared_informer.go:318] Caches are synced for cronjob
I0706 18:06:07.025214 1 shared_informer.go:318] Caches are synced for persistent volume
I0706 18:06:07.035650 1 shared_informer.go:318] Caches are synced for job
I0706 18:06:07.044933 1 shared_informer.go:318] Caches are synced for TTL after finished
I0706 18:06:07.064566 1 shared_informer.go:318] Caches are synced for resource quota
I0706 18:06:07.126172 1 shared_informer.go:318] Caches are synced for resource quota
I0706 18:06:07.451514 1 shared_informer.go:318] Caches are synced for garbage collector
I0706 18:06:07.521121 1 shared_informer.go:318] Caches are synced for garbage collector
I0706 18:06:07.521169 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0706 18:06:07.589617 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6dkl9"
I0706 18:06:07.764403 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
I0706 18:06:07.929348 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-q2hxj"
I0706 18:06:07.954332 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-9gcrb"
I0706 18:06:08.135202 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
I0706 18:06:08.159488 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-q2hxj"
*
* ==> kube-proxy [7283a810d144e3a711d63448c915fffa7689b83097635ed40c83957a0af2a084] <==
* E0706 18:07:07.727637 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:07.727930 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:07.728248 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:08.717738 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:08.717977 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:08.968355 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:08.968453 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:09.183363 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:09.183741 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:10.602317 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:10.602389 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:11.360921 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:11.361047 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:11.590477 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:11.590859 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:15.377224 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:15.377257 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:15.427333 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:15.427390 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-645838&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:16.407709 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:16.407847 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:18.293545 1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.124:8441: connect: connection refused' (may retry after sleeping)
I0706 18:07:27.325144 1 shared_informer.go:318] Caches are synced for service config
I0706 18:07:27.726709 1 shared_informer.go:318] Caches are synced for node config
I0706 18:07:28.324825 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
* ==> kube-proxy [accc689f1f1eb182aa32d8224b2e64e1e1050f3f022f91779fc06433d4c3f6ee] <==
* I0706 18:06:10.103586 1 node.go:141] Successfully retrieved node IP: 192.168.39.124
I0706 18:06:10.104182 1 server_others.go:110] "Detected node IP" address="192.168.39.124"
I0706 18:06:10.104351 1 server_others.go:554] "Using iptables proxy"
I0706 18:06:10.140298 1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
I0706 18:06:10.140313 1 server_others.go:192] "Using iptables Proxier"
I0706 18:06:10.140340 1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0706 18:06:10.141149 1 server.go:658] "Version info" version="v1.27.3"
I0706 18:06:10.141159 1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0706 18:06:10.142220 1 config.go:188] "Starting service config controller"
I0706 18:06:10.142239 1 shared_informer.go:311] Waiting for caches to sync for service config
I0706 18:06:10.142268 1 config.go:97] "Starting endpoint slice config controller"
I0706 18:06:10.142271 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0706 18:06:10.142518 1 config.go:315] "Starting node config controller"
I0706 18:06:10.142523 1 shared_informer.go:311] Waiting for caches to sync for node config
I0706 18:06:10.242972 1 shared_informer.go:318] Caches are synced for node config
I0706 18:06:10.242996 1 shared_informer.go:318] Caches are synced for service config
I0706 18:06:10.243017 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [2996ff4c6392c7853264ace44c43ac6af11a7dbc63742932bccc158e071774d9] <==
* W0706 18:07:16.790260 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:16.790295 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:16.908484 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.124:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:16.908519 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.124:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.073551 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.124:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.073585 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.124:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.136350 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.124:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.136399 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.124:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.219302 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.124:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.219339 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.124:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.304454 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.124:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.304511 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.124:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.440710 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.124:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.440916 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.124:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.506894 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.39.124:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.507052 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.124:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:17.937276 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:17.937373 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:18.093266 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
E0706 18:07:18.093342 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.124:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.124:8441: connect: connection refused
W0706 18:07:20.308118 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0706 18:07:20.308171 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0706 18:07:23.570747 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0706 18:07:26.669140 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0706 18:07:26.670349 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
*
* ==> kube-scheduler [e9b928f0a7f638a50e28118c36ca6bbab44955478a5aee558488d5c3a37aa090] <==
* W0706 18:05:52.116870 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0706 18:05:52.116920 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0706 18:05:52.185759 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0706 18:05:52.186098 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0706 18:05:52.216451 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0706 18:05:52.216985 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0706 18:05:52.236840 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0706 18:05:52.237053 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0706 18:05:52.324144 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0706 18:05:52.324251 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0706 18:05:52.353008 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0706 18:05:52.353119 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0706 18:05:52.447490 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0706 18:05:52.447703 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0706 18:05:52.481154 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0706 18:05:52.481211 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0706 18:05:52.542151 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0706 18:05:52.542323 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0706 18:05:52.677300 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0706 18:05:52.677929 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0706 18:05:55.408143 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0706 18:07:04.804997 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0706 18:07:04.805066 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0706 18:07:04.805292 1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
E0706 18:07:04.805335 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Thu 2023-07-06 18:05:22 UTC, ends at Thu 2023-07-06 18:07:49 UTC. --
Jul 06 18:07:22 functional-645838 kubelet[3560]: I0706 18:07:22.569626 3560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=26a6e6d531a05fff1c8bd0b030fb008c path="/var/lib/kubelet/pods/26a6e6d531a05fff1c8bd0b030fb008c/volumes"
Jul 06 18:07:39 functional-645838 kubelet[3560]: I0706 18:07:39.506384 3560 topology_manager.go:212] "Topology Admit Handler"
Jul 06 18:07:39 functional-645838 kubelet[3560]: E0706 18:07:39.506487 3560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a6e6d531a05fff1c8bd0b030fb008c" containerName="kube-apiserver"
Jul 06 18:07:39 functional-645838 kubelet[3560]: I0706 18:07:39.506513 3560 memory_manager.go:346] "RemoveStaleState removing state" podUID="26a6e6d531a05fff1c8bd0b030fb008c" containerName="kube-apiserver"
Jul 06 18:07:39 functional-645838 kubelet[3560]: I0706 18:07:39.586693 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vvk\" (UniqueName: \"kubernetes.io/projected/2a8d5260-dd54-4257-a718-2fcaa8f52e76-kube-api-access-f7vvk\") pod \"invalid-svc\" (UID: \"2a8d5260-dd54-4257-a718-2fcaa8f52e76\") " pod="default/invalid-svc"
Jul 06 18:07:41 functional-645838 kubelet[3560]: E0706 18:07:41.322228 3560 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
Jul 06 18:07:41 functional-645838 kubelet[3560]: E0706 18:07:41.322276 3560 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
Jul 06 18:07:41 functional-645838 kubelet[3560]: E0706 18:07:41.322459 3560 kuberuntime_manager.go:1212] container &Container{Name:nginx,Image:nonexistingimage:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7vvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod invalid-svc_default(2a8d5260-dd54-4257-a718-2fcaa8f52e76): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nonexistingimage:latest": failed to resolve reference "docker.io/library/nonexistingimage:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Jul 06 18:07:41 functional-645838 kubelet[3560]: E0706 18:07:41.322495 3560 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID=2a8d5260-dd54-4257-a718-2fcaa8f52e76
Jul 06 18:07:41 functional-645838 kubelet[3560]: E0706 18:07:41.716227 3560 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\"\"" pod="default/invalid-svc" podUID=2a8d5260-dd54-4257-a718-2fcaa8f52e76
Jul 06 18:07:43 functional-645838 kubelet[3560]: I0706 18:07:43.116021 3560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7vvk\" (UniqueName: \"kubernetes.io/projected/2a8d5260-dd54-4257-a718-2fcaa8f52e76-kube-api-access-f7vvk\") pod \"2a8d5260-dd54-4257-a718-2fcaa8f52e76\" (UID: \"2a8d5260-dd54-4257-a718-2fcaa8f52e76\") "
Jul 06 18:07:43 functional-645838 kubelet[3560]: I0706 18:07:43.123155 3560 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8d5260-dd54-4257-a718-2fcaa8f52e76-kube-api-access-f7vvk" (OuterVolumeSpecName: "kube-api-access-f7vvk") pod "2a8d5260-dd54-4257-a718-2fcaa8f52e76" (UID: "2a8d5260-dd54-4257-a718-2fcaa8f52e76"). InnerVolumeSpecName "kube-api-access-f7vvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 06 18:07:43 functional-645838 kubelet[3560]: I0706 18:07:43.216744 3560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f7vvk\" (UniqueName: \"kubernetes.io/projected/2a8d5260-dd54-4257-a718-2fcaa8f52e76-kube-api-access-f7vvk\") on node \"functional-645838\" DevicePath \"\""
Jul 06 18:07:43 functional-645838 kubelet[3560]: I0706 18:07:43.881488 3560 topology_manager.go:212] "Topology Admit Handler"
Jul 06 18:07:44 functional-645838 kubelet[3560]: I0706 18:07:44.021105 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wctw\" (UniqueName: \"kubernetes.io/projected/af07169a-9956-46e9-9846-5a5e03fc029e-kube-api-access-9wctw\") pod \"hello-node-775766b4cc-ct7vz\" (UID: \"af07169a-9956-46e9-9846-5a5e03fc029e\") " pod="default/hello-node-775766b4cc-ct7vz"
Jul 06 18:07:44 functional-645838 kubelet[3560]: I0706 18:07:44.570234 3560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2a8d5260-dd54-4257-a718-2fcaa8f52e76 path="/var/lib/kubelet/pods/2a8d5260-dd54-4257-a718-2fcaa8f52e76/volumes"
Jul 06 18:07:45 functional-645838 kubelet[3560]: I0706 18:07:45.707081 3560 topology_manager.go:212] "Topology Admit Handler"
Jul 06 18:07:45 functional-645838 kubelet[3560]: I0706 18:07:45.836625 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktm8\" (UniqueName: \"kubernetes.io/projected/ad13f420-8700-4873-8ae3-bfd36052b2c8-kube-api-access-hktm8\") pod \"busybox-mount\" (UID: \"ad13f420-8700-4873-8ae3-bfd36052b2c8\") " pod="default/busybox-mount"
Jul 06 18:07:45 functional-645838 kubelet[3560]: I0706 18:07:45.836700 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ad13f420-8700-4873-8ae3-bfd36052b2c8-test-volume\") pod \"busybox-mount\" (UID: \"ad13f420-8700-4873-8ae3-bfd36052b2c8\") " pod="default/busybox-mount"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.330512 3560 topology_manager.go:212] "Topology Admit Handler"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.378691 3560 topology_manager.go:212] "Topology Admit Handler"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.455019 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3bc53922-23dd-4c2d-80f5-fce9078159a3-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-rh4d4\" (UID: \"3bc53922-23dd-4c2d-80f5-fce9078159a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rh4d4"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.455103 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/10713f38-e41e-42ab-bc89-d3a12ccfd4ec-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-d2lk7\" (UID: \"10713f38-e41e-42ab-bc89-d3a12ccfd4ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-d2lk7"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.455131 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd9q6\" (UniqueName: \"kubernetes.io/projected/3bc53922-23dd-4c2d-80f5-fce9078159a3-kube-api-access-gd9q6\") pod \"kubernetes-dashboard-5c5cfc8747-rh4d4\" (UID: \"3bc53922-23dd-4c2d-80f5-fce9078159a3\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rh4d4"
Jul 06 18:07:47 functional-645838 kubelet[3560]: I0706 18:07:47.455150 3560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glvvk\" (UniqueName: \"kubernetes.io/projected/10713f38-e41e-42ab-bc89-d3a12ccfd4ec-kube-api-access-glvvk\") pod \"dashboard-metrics-scraper-5dd9cbfd69-d2lk7\" (UID: \"10713f38-e41e-42ab-bc89-d3a12ccfd4ec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-d2lk7"
*
* ==> storage-provisioner [76396e07223d28ae9302588fe64eccc59f7e910afc77915360bff84c116d54c3] <==
* I0706 18:06:11.510914 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0706 18:06:11.525325 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0706 18:06:11.525569 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0706 18:06:11.535352 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0706 18:06:11.536544 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6418f319-a5fc-41d4-88a3-1142173d38b7", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-645838_1b2ba7bc-87e3-48dd-b474-d128675eb7ed became leader
I0706 18:06:11.536679 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-645838_1b2ba7bc-87e3-48dd-b474-d128675eb7ed!
I0706 18:06:11.637162 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-645838_1b2ba7bc-87e3-48dd-b474-d128675eb7ed!
*
* ==> storage-provisioner [847ae866092409f7c6b5c21352724156661ecf0721ba606a83d5871e6e9ba946] <==
* I0706 18:06:59.859007 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0706 18:06:59.869714 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0706 18:06:59.869871 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
E0706 18:07:11.248938 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0706 18:07:14.300426 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
E0706 18:07:17.324137 1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
I0706 18:07:20.994513 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0706 18:07:20.995582 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6418f319-a5fc-41d4-88a3-1142173d38b7", APIVersion:"v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-645838_7dc3cd29-d755-42b8-b278-c451398d4ba9 became leader
I0706 18:07:20.995664 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-645838_7dc3cd29-d755-42b8-b278-c451398d4ba9!
I0706 18:07:21.096953 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-645838_7dc3cd29-d755-42b8-b278-c451398d4ba9!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-645838 -n functional-645838
helpers_test.go:261: (dbg) Run: kubectl --context functional-645838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5dd9cbfd69-d2lk7 kubernetes-dashboard-5c5cfc8747-rh4d4
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-645838 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-d2lk7 kubernetes-dashboard-5c5cfc8747-rh4d4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-645838 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-d2lk7 kubernetes-dashboard-5c5cfc8747-rh4d4: exit status 1 (74.868363ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-645838/192.168.39.124
Start Time: Thu, 06 Jul 2023 18:07:45 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mount-munger:
Container ID:
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hktm8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-hktm8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned default/busybox-mount to functional-645838
Normal Pulling 4s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-5dd9cbfd69-d2lk7" not found
Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-rh4d4" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-645838 describe pod busybox-mount dashboard-metrics-scraper-5dd9cbfd69-d2lk7 kubernetes-dashboard-5c5cfc8747-rh4d4: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.08s)