=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1]
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:911: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] ...
functional_test.go:903: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] stdout:
functional_test.go:903: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] stderr:
I0114 10:22:48.366932 15909 out.go:296] Setting OutFile to fd 1 ...
I0114 10:22:48.367114 15909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.367121 15909 out.go:309] Setting ErrFile to fd 2...
I0114 10:22:48.367133 15909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.367361 15909 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
I0114 10:22:48.368004 15909 mustload.go:65] Loading cluster: functional-101929
I0114 10:22:48.368489 15909 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:22:48.369043 15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.369098 15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.385348 15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46591
I0114 10:22:48.385757 15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.386330 15909 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.386355 15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.386702 15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.386896 15909 main.go:134] libmachine: (functional-101929) Calling .GetState
I0114 10:22:48.388486 15909 host.go:66] Checking if "functional-101929" exists ...
I0114 10:22:48.388750 15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.388789 15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.405149 15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33789
I0114 10:22:48.405547 15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.406021 15909 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.406049 15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.406414 15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.406573 15909 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.406688 15909 api_server.go:165] Checking apiserver status ...
I0114 10:22:48.406727 15909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:22:48.406752 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHHostname
I0114 10:22:48.409660 15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.410126 15909 main.go:134] libmachine: (functional-101929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:5a:e6", ip: ""} in network mk-functional-101929: {Iface:virbr1 ExpiryTime:2023-01-14 11:19:44 +0000 UTC Type:0 Mac:52:54:00:1c:5a:e6 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-101929 Clientid:01:52:54:00:1c:5a:e6}
I0114 10:22:48.410157 15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined IP address 192.168.39.97 and MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.410257 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHPort
I0114 10:22:48.410419 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHKeyPath
I0114 10:22:48.410513 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHUsername
I0114 10:22:48.410599 15909 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/functional-101929/id_rsa Username:docker}
I0114 10:22:48.521778 15909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8200/cgroup
I0114 10:22:48.537841 15909 api_server.go:181] apiserver freezer: "9:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9a85bf2ce5621968ebde74c119e86b.slice/docker-662249b9b6d3dfdde8f9e1885635babde59c608e53d06fde669650ea7da5d0bf.scope"
I0114 10:22:48.537907 15909 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9a85bf2ce5621968ebde74c119e86b.slice/docker-662249b9b6d3dfdde8f9e1885635babde59c608e53d06fde669650ea7da5d0bf.scope/freezer.state
I0114 10:22:48.551246 15909 api_server.go:203] freezer state: "THAWED"
I0114 10:22:48.551276 15909 api_server.go:252] Checking apiserver healthz at https://192.168.39.97:8441/healthz ...
I0114 10:22:48.558777 15909 api_server.go:278] https://192.168.39.97:8441/healthz returned 200:
ok
W0114 10:22:48.558826 15909 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0114 10:22:48.559042 15909 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:22:48.559052 15909 addons.go:65] Setting dashboard=true in profile "functional-101929"
I0114 10:22:48.559060 15909 addons.go:227] Setting addon dashboard=true in "functional-101929"
I0114 10:22:48.559086 15909 host.go:66] Checking if "functional-101929" exists ...
I0114 10:22:48.559447 15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.559484 15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.581964 15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:37381
I0114 10:22:48.582412 15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.582923 15909 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.582952 15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.583259 15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.583796 15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.583834 15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.599320 15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45619
I0114 10:22:48.599745 15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.600239 15909 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.600266 15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.600606 15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.600758 15909 main.go:134] libmachine: (functional-101929) Calling .GetState
I0114 10:22:48.602600 15909 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.607036 15909 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0114 10:22:48.608599 15909 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0114 10:22:48.609924 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0114 10:22:48.609949 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0114 10:22:48.609970 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHHostname
I0114 10:22:48.613382 15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.613778 15909 main.go:134] libmachine: (functional-101929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:5a:e6", ip: ""} in network mk-functional-101929: {Iface:virbr1 ExpiryTime:2023-01-14 11:19:44 +0000 UTC Type:0 Mac:52:54:00:1c:5a:e6 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-101929 Clientid:01:52:54:00:1c:5a:e6}
I0114 10:22:48.613803 15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined IP address 192.168.39.97 and MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.614021 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHPort
I0114 10:22:48.614175 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHKeyPath
I0114 10:22:48.614304 15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHUsername
I0114 10:22:48.614412 15909 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/functional-101929/id_rsa Username:docker}
I0114 10:22:48.732976 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0114 10:22:48.732997 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0114 10:22:48.753988 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0114 10:22:48.754011 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0114 10:22:48.791400 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0114 10:22:48.791426 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0114 10:22:48.818734 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0114 10:22:48.818755 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0114 10:22:48.849586 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
I0114 10:22:48.849610 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0114 10:22:48.879509 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0114 10:22:48.879529 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0114 10:22:48.909810 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0114 10:22:48.909837 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0114 10:22:48.937225 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0114 10:22:48.937250 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0114 10:22:48.966707 15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0114 10:22:48.966732 15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0114 10:22:48.985882 15909 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0114 10:22:50.276180 15909 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.290252341s)
I0114 10:22:50.276250 15909 main.go:134] libmachine: Making call to close driver server
I0114 10:22:50.276270 15909 main.go:134] libmachine: (functional-101929) Calling .Close
I0114 10:22:50.276530 15909 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:22:50.276553 15909 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:22:50.276563 15909 main.go:134] libmachine: Making call to close driver server
I0114 10:22:50.276572 15909 main.go:134] libmachine: (functional-101929) Calling .Close
I0114 10:22:50.276768 15909 main.go:134] libmachine: (functional-101929) DBG | Closing plugin on server side
I0114 10:22:50.276809 15909 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:22:50.276821 15909 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:22:50.278924 15909 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-101929 addons enable metrics-server
I0114 10:22:50.280259 15909 addons.go:190] Writing out "functional-101929" config to set dashboard=true...
W0114 10:22:50.280511 15909 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0114 10:22:50.281205 15909 kapi.go:59] client config for functional-101929: &rest.Config{Host:"https://192.168.39.97:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.key", CAFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 10:22:50.290184 15909 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard 0042f998-ead6-4020-94cd-3f5903597aa0 767 0 2023-01-14 10:22:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2023-01-14 10:22:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.22.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.22.190],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0114 10:22:50.290316 15909 out.go:239] * Launching proxy ...
* Launching proxy ...
I0114 10:22:50.290370 15909 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-101929 proxy --port 36195]
I0114 10:22:50.290615 15909 dashboard.go:157] Waiting for kubectl to output host:port ...
I0114 10:22:50.340031 15909 out.go:177]
W0114 10:22:50.341841 15909 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
W0114 10:22:50.341865 15909 out.go:239] *
*
W0114 10:22:50.343987 15909 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0114 10:22:50.345583 15909 out.go:177]
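
The failing step above is the proxy launch: the dashboard manifests applied cleanly and the apiserver health check returned 200, but the kubectl proxy child process exited (EOF) before emitting the host:port line that dashboard.go:157 waits for, which minikube surfaces as HOST_KUBECTL_PROXY: readByteWithTimeout: EOF. A rough, hypothetical way to exercise that same step outside the test harness, assuming the profile name and port from the log and a kubectl binary on the host PATH:

    # run the same proxy command minikube executed at dashboard.go:152;
    # a healthy proxy normally prints a line like "Starting to serve on 127.0.0.1:36195" and stays up
    kubectl --context functional-101929 proxy --port 36195

    # if it exits immediately, confirm the context exists and the apiserver answers
    kubectl config get-contexts
    kubectl --context functional-101929 get --raw /healthz
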
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-101929 -n functional-101929
=== CONT TestFunctional/parallel/DashboardCmd
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-101929 logs -n 25
=== CONT TestFunctional/parallel/DashboardCmd
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 logs -n 25: (1.751241702s)
=== CONT TestFunctional/parallel/DashboardCmd
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-101929 ssh findmnt | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| ssh | functional-101929 ssh -- ls | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | -la /mount-9p | | | | | |
| ssh | functional-101929 ssh cat | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | /mount-9p/test-1673691756250419096 | | | | | |
| ssh | functional-101929 ssh stat | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | /mount-9p/created-by-test | | | | | |
| ssh | functional-101929 ssh stat | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | /mount-9p/created-by-pod | | | | | |
| ssh | functional-101929 ssh sudo | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | umount -f /mount-9p | | | | | |
| service | functional-101929 service | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | hello-node-connect --url | | | | | |
| ssh | functional-101929 ssh findmnt | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | -T /mount-9p | grep 9p | | | | | |
| mount | -p functional-101929 | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | /tmp/TestFunctionalparallelMountCmdspecific-port3360609635/001:/mount-9p | | | | | |
| | --alsologtostderr -v=1 --port 46464 | | | | | |
| service | functional-101929 service list | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| service | functional-101929 service | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | --namespace=default --https | | | | | |
| | --url hello-node | | | | | |
| service | functional-101929 | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | service hello-node --url | | | | | |
| | --format={{.IP}} | | | | | |
| ssh | functional-101929 ssh findmnt | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | -T /mount-9p | grep 9p | | | | | |
| service | functional-101929 service | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | hello-node --url | | | | | |
| ssh | functional-101929 ssh -- ls | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | -la /mount-9p | | | | | |
| start | -p functional-101929 | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| start | -p functional-101929 | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| dashboard | --url --port 36195 | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | -p functional-101929 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| start | -p functional-101929 --dry-run | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=kvm2 | | | | | |
| ssh | functional-101929 ssh sudo | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | umount -f /mount-9p | | | | | |
| ssh | functional-101929 ssh sudo | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | |
| | systemctl is-active crio | | | | | |
| license | | minikube | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| ssh | functional-101929 ssh sudo cat | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| | /etc/test/nested/copy/10851/hosts | | | | | |
| docker-env | functional-101929 docker-env | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
| docker-env | functional-101929 docker-env | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
|------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 10:22:48
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.19.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 10:22:48.449461 15946 out.go:296] Setting OutFile to fd 1 ...
I0114 10:22:48.449569 15946 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.449578 15946 out.go:309] Setting ErrFile to fd 2...
I0114 10:22:48.449585 15946 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.449698 15946 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
I0114 10:22:48.450241 15946 out.go:303] Setting JSON to false
I0114 10:22:48.451133 15946 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3915,"bootTime":1673687854,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0114 10:22:48.451193 15946 start.go:135] virtualization: kvm guest
I0114 10:22:48.453545 15946 out.go:177] * [functional-101929] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0114 10:22:48.455106 15946 out.go:177] - MINIKUBE_LOCATION=15642
I0114 10:22:48.454992 15946 notify.go:220] Checking for updates...
I0114 10:22:48.456581 15946 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 10:22:48.458165 15946 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
I0114 10:22:48.459759 15946 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
I0114 10:22:48.461218 15946 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0114 10:22:48.463149 15946 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:22:48.463718 15946 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.463782 15946 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.482743 15946 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44223
I0114 10:22:48.483079 15946 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.483588 15946 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.483614 15946 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.483986 15946 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.484201 15946 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.484397 15946 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 10:22:48.484713 15946 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.484736 15946 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.499088 15946 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45067
I0114 10:22:48.499465 15946 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.499895 15946 main.go:134] libmachine: Using API Version 1
I0114 10:22:48.499917 15946 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.500331 15946 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.500508 15946 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.534393 15946 out.go:177] * Using the kvm2 driver based on existing profile
I0114 10:22:48.535652 15946 start.go:294] selected driver: kvm2
I0114 10:22:48.535675 15946 start.go:838] validating driver "kvm2" against &{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:22:48.535843 15946 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 10:22:48.537009 15946 cni.go:95] Creating CNI manager for ""
I0114 10:22:48.537031 15946 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 10:22:48.537045 15946 start_flags.go:319] config:
{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 10:22:48.538642 15946 out.go:177] * dry-run validation complete!
*
* ==> Docker <==
* -- Journal begins at Sat 2023-01-14 10:19:40 UTC, ends at Sat 2023-01-14 10:22:51 UTC. --
Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990206598Z" level=info msg="shim disconnected" id=4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492
Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990274218Z" level=warning msg="cleaning up after shim disconnected" id=4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492 namespace=moby
Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990292245Z" level=info msg="cleaning up dead shim"
Jan 14 10:22:43 functional-101929 dockerd[7031]: time="2023-01-14T10:22:43.012006286Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:22:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10179 runtime=io.containerd.runc.v2\n"
Jan 14 10:22:44 functional-101929 dockerd[7025]: time="2023-01-14T10:22:44.150920177Z" level=info msg="ignoring event" container=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151476621Z" level=info msg="shim disconnected" id=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53
Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151546439Z" level=warning msg="cleaning up after shim disconnected" id=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53 namespace=moby
Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151558182Z" level=info msg="cleaning up dead shim"
Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.164342160Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:22:44Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10213 runtime=io.containerd.runc.v2\n"
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385741640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385786393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385968521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.386885951Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0b8bb602074f1626e9e9a014202a6ae03103f6b5159563cb7fa523a9fc5b9bfc pid=10522 runtime=io.containerd.runc.v2
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408494138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408574821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408586431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.409002604Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8ab6f34547d848a8c5c277663f96e24242ca90d5f651a13e2650b30bd6766a77 pid=10540 runtime=io.containerd.runc.v2
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729260096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729347707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729360600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729490994Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6ab99194b24a7f64a71ac8e0a0ce5a9df1ebfdbcb5a8bdde0e3a449847c16ca8 pid=10655 runtime=io.containerd.runc.v2
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885027826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885131476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885149877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.887896325Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5dfca06ae9d25542b8dee48710f9fd7270d1f275fb90607ab6ea226aa258ee67 pid=10694 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0b8bb602074f1 nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e 1 second ago Running myfrontend 0 d270042e693bf
4fe6f4288a44c gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 9 seconds ago Exited mount-munger 0 0c897e21caec7
ee51d49becfd8 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 11 seconds ago Running echoserver 0 2fa22299385e8
cec9dbe775c94 k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969 11 seconds ago Running echoserver 0 19e664be1ffc8
57fd1ef65853c 6e38f40d628db 36 seconds ago Running storage-provisioner 3 2edd71218fb3a
f4cf41b7a789b 5185b96f0becf 36 seconds ago Running coredns 3 b33e8d57bcf3a
346374e1a8105 beaaf00edd38a 36 seconds ago Running kube-proxy 2 2d16d6c469917
d2acb554058ad 6d23ec0e8b87e 43 seconds ago Running kube-scheduler 3 b5a05bd4b5bfd
63d23c5f3319d a8a176a5d5d69 43 seconds ago Running etcd 3 fb2246303985c
d6ff8dba06d14 6039992312758 44 seconds ago Running kube-controller-manager 3 9853a039e0174
662249b9b6d3d 0346dbd74bcb9 44 seconds ago Running kube-apiserver 0 30a60d02f02ef
fe58c94596efc 0346dbd74bcb9 About a minute ago Exited kube-apiserver 2 5f7c1ec799c65
354e7efc780ef 5185b96f0becf About a minute ago Exited coredns 2 65716b73ad53a
a526f6daec052 6e38f40d628db About a minute ago Exited storage-provisioner 2 a4b749c848639
0efe7d2376321 a8a176a5d5d69 About a minute ago Exited etcd 2 331e4edc23c3f
667c195cb4078 6039992312758 About a minute ago Exited kube-controller-manager 2 50dbedfb01b78
c161ff402f218 6d23ec0e8b87e About a minute ago Exited kube-scheduler 2 e0a64980087e7
b00430b089413 beaaf00edd38a About a minute ago Exited kube-proxy 1 579b275b3256b
*
* ==> coredns [354e7efc780e] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> coredns [f4cf41b7a789] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: functional-101929
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-101929
kubernetes.io/os=linux
minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
minikube.k8s.io/name=functional-101929
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_14T10_20_24_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 10:20:21 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-101929
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 10:22:43 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 14 Jan 2023 10:22:13 +0000 Sat, 14 Jan 2023 10:20:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 14 Jan 2023 10:22:13 +0000 Sat, 14 Jan 2023 10:20:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 14 Jan 2023 10:22:13 +0000 Sat, 14 Jan 2023 10:20:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 14 Jan 2023 10:22:13 +0000 Sat, 14 Jan 2023 10:20:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.97
Hostname: functional-101929
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3914504Ki
pods: 110
System Info:
Machine ID: f8d25365c3fc43349fafa995aab525ca
System UUID: f8d25365-c3fc-4334-9faf-a995aab525ca
Boot ID: e6324964-2a4d-4979-8db2-1d6e2da96aae
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace              Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------              ----                                          ------------  ----------  ---------------  -------------  ---
default                hello-node-5fcdfb5cc4-p2jf4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
default                hello-node-connect-6458c8fb6f-qmp48           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
default                mysql-596b7fcdbf-mphb5                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2s
default                sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
kube-system            coredns-565d847f94-prqt2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m15s
kube-system            etcd-functional-101929                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m27s
kube-system            kube-apiserver-functional-101929              250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
kube-system            kube-controller-manager-functional-101929     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
kube-system            kube-proxy-wjfgl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
kube-system            kube-scheduler-functional-101929              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
kube-system            storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
kubernetes-dashboard   dashboard-metrics-scraper-5f5c79dd8f-qvjjj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
kubernetes-dashboard   kubernetes-dashboard-f87d45d87-2qxk5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                1350m (67%)  700m (35%)
memory             682Mi (17%)  870Mi (22%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m12s kube-proxy
Normal Starting 36s kube-proxy
Normal Starting 90s kube-proxy
Normal Starting 2m27s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m27s kubelet Node functional-101929 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m27s kubelet Node functional-101929 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m27s kubelet Node functional-101929 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m27s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 2m16s kubelet Node functional-101929 status is now: NodeReady
Normal RegisteredNode 2m16s node-controller Node functional-101929 event: Registered Node functional-101929 in Controller
Normal NodeNotReady 116s kubelet Node functional-101929 status is now: NodeNotReady
Normal RegisteredNode 78s node-controller Node functional-101929 event: Registered Node functional-101929 in Controller
Normal Starting 45s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 45s (x8 over 45s) kubelet Node functional-101929 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 45s (x8 over 45s) kubelet Node functional-101929 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 45s (x7 over 45s) kubelet Node functional-101929 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 45s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 25s node-controller Node functional-101929 event: Registered Node functional-101929 in Controller
*
* ==> dmesg <==
* [Jan14 10:20] systemd-fstab-generator[735]: Ignoring "noauto" for root device
[ +3.877665] kauditd_printk_skb: 14 callbacks suppressed
[ +0.298388] systemd-fstab-generator[897]: Ignoring "noauto" for root device
[ +0.112483] systemd-fstab-generator[908]: Ignoring "noauto" for root device
[ +0.101344] systemd-fstab-generator[919]: Ignoring "noauto" for root device
[ +1.443152] systemd-fstab-generator[1070]: Ignoring "noauto" for root device
[ +0.104745] systemd-fstab-generator[1081]: Ignoring "noauto" for root device
[ +4.873231] systemd-fstab-generator[1346]: Ignoring "noauto" for root device
[ +0.451045] kauditd_printk_skb: 68 callbacks suppressed
[ +11.256737] systemd-fstab-generator[2011]: Ignoring "noauto" for root device
[ +12.967335] kauditd_printk_skb: 8 callbacks suppressed
[ +12.050358] kauditd_printk_skb: 20 callbacks suppressed
[ +3.879450] systemd-fstab-generator[3187]: Ignoring "noauto" for root device
[ +0.148623] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
[ +0.155205] systemd-fstab-generator[3209]: Ignoring "noauto" for root device
[Jan14 10:21] systemd-fstab-generator[4597]: Ignoring "noauto" for root device
[ +0.145470] systemd-fstab-generator[4620]: Ignoring "noauto" for root device
[ +10.516240] kauditd_printk_skb: 31 callbacks suppressed
[ +24.395470] systemd-fstab-generator[6229]: Ignoring "noauto" for root device
[ +0.177527] systemd-fstab-generator[6315]: Ignoring "noauto" for root device
[ +0.174314] systemd-fstab-generator[6350]: Ignoring "noauto" for root device
[Jan14 10:22] systemd-fstab-generator[7435]: Ignoring "noauto" for root device
[ +0.134951] systemd-fstab-generator[7471]: Ignoring "noauto" for root device
[ +2.116400] systemd-fstab-generator[7809]: Ignoring "noauto" for root device
[ +8.181523] kauditd_printk_skb: 31 callbacks suppressed
*
* ==> etcd [0efe7d237632] <==
* {"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 3"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 3"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 4"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 4"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 4"}
{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 4"}
{"level":"info","ts":"2023-01-14T10:21:18.036Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:functional-101929 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T10:21:18.036Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:21:18.037Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:21:18.038Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.97:2379"}
{"level":"info","ts":"2023-01-14T10:21:18.038Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-14T10:21:18.039Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T10:21:18.039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T10:21:46.619Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-01-14T10:21:46.619Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-101929","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
WARNING: 2023/01/14 10:21:46 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2023/01/14 10:21:46 [core] grpc: addrConn.createTransport failed to connect to {192.168.39.97:2379 192.168.39.97:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.39.97:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2023-01-14T10:21:46.646Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
{"level":"info","ts":"2023-01-14T10:21:46.649Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:21:46.650Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:21:46.650Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-101929","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
*
* ==> etcd [63d23c5f3319] <==
* {"level":"info","ts":"2023-01-14T10:22:09.776Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.97:2380"}
{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-14T10:22:11.119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 4"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 4"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 4"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 5"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 5"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 5"}
{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 5"}
{"level":"info","ts":"2023-01-14T10:22:11.122Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:functional-101929 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T10:22:11.122Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-14T10:22:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.97:2379"}
{"level":"warn","ts":"2023-01-14T10:22:50.088Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"335.019796ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7869624388982618964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" value_size:2687 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2023-01-14T10:22:50.089Z","caller":"traceutil/trace.go:171","msg":"trace[489555664] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"336.463864ms","start":"2023-01-14T10:22:49.752Z","end":"2023-01-14T10:22:50.089Z","steps":["trace[489555664] 'compare' (duration: 334.675735ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:22:50.089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:22:49.752Z","time spent":"336.802641ms","remote":"127.0.0.1:39558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" value_size:2687 >> failure:<>"}
{"level":"info","ts":"2023-01-14T10:22:50.095Z","caller":"traceutil/trace.go:171","msg":"trace[936770606] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"342.428132ms","start":"2023-01-14T10:22:49.752Z","end":"2023-01-14T10:22:50.095Z","steps":["trace[936770606] 'process raft request' (duration: 342.080307ms)"],"step_count":1}
{"level":"warn","ts":"2023-01-14T10:22:50.095Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:22:49.752Z","time spent":"342.488803ms","remote":"127.0.0.1:39544","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":987,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-key-holder\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-key-holder\" value_size:909 >> failure:<>"}
{"level":"info","ts":"2023-01-14T10:22:50.095Z","caller":"traceutil/trace.go:171","msg":"trace[1524391402] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"288.055657ms","start":"2023-01-14T10:22:49.807Z","end":"2023-01-14T10:22:50.095Z","steps":["trace[1524391402] 'process raft request' (duration: 287.550274ms)"],"step_count":1}
*
* ==> kernel <==
* 10:22:52 up 3 min, 0 users, load average: 2.14, 1.11, 0.44
Linux functional-101929 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [662249b9b6d3] <==
* I0114 10:22:13.304178 1 controller.go:85] Starting OpenAPI controller
I0114 10:22:13.304191 1 controller.go:85] Starting OpenAPI V3 controller
I0114 10:22:13.304203 1 naming_controller.go:291] Starting NamingConditionController
I0114 10:22:13.304213 1 establishing_controller.go:76] Starting EstablishingController
I0114 10:22:13.304219 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0114 10:22:13.304226 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0114 10:22:13.304232 1 crd_finalizer.go:266] Starting CRDFinalizer
I0114 10:22:13.443302 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0114 10:22:13.447121 1 shared_informer.go:262] Caches are synced for node_authorizer
I0114 10:22:14.005194 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0114 10:22:14.274787 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0114 10:22:14.989468 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0114 10:22:14.998121 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0114 10:22:15.040536 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0114 10:22:15.089206 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0114 10:22:15.097871 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0114 10:22:33.118628 1 controller.go:616] quota admission added evaluator for: endpoints
I0114 10:22:34.460621 1 controller.go:616] quota admission added evaluator for: replicasets.apps
I0114 10:22:34.579008 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.99.79.1]
I0114 10:22:34.598815 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0114 10:22:34.885546 1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.106.14.29]
I0114 10:22:49.311896 1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.105.193.71]
I0114 10:22:49.422241 1 controller.go:616] quota admission added evaluator for: namespaces
I0114 10:22:50.202735 1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.22.190]
I0114 10:22:50.243584 1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.48.159]
*
* ==> kube-apiserver [fe58c94596ef] <==
* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0114 10:22:00.161619 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0114 10:22:00.814178 1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0114 10:22:01.062119 1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-controller-manager [667c195cb407] <==
* I0114 10:21:33.739022 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0114 10:21:33.744248 1 shared_informer.go:262] Caches are synced for namespace
I0114 10:21:33.746544 1 shared_informer.go:262] Caches are synced for deployment
I0114 10:21:33.755004 1 shared_informer.go:262] Caches are synced for ephemeral
I0114 10:21:33.758510 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0114 10:21:33.760014 1 shared_informer.go:262] Caches are synced for stateful set
I0114 10:21:33.761554 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0114 10:21:33.768094 1 shared_informer.go:262] Caches are synced for node
I0114 10:21:33.768328 1 range_allocator.go:166] Starting range CIDR allocator
I0114 10:21:33.768468 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0114 10:21:33.768751 1 shared_informer.go:262] Caches are synced for cidrallocator
I0114 10:21:33.769850 1 shared_informer.go:262] Caches are synced for expand
I0114 10:21:33.773075 1 shared_informer.go:262] Caches are synced for TTL after finished
I0114 10:21:33.778299 1 shared_informer.go:262] Caches are synced for endpoint
I0114 10:21:33.789045 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0114 10:21:33.791655 1 shared_informer.go:262] Caches are synced for PVC protection
I0114 10:21:33.791752 1 shared_informer.go:262] Caches are synced for HPA
I0114 10:21:33.812292 1 shared_informer.go:262] Caches are synced for disruption
I0114 10:21:33.817900 1 shared_informer.go:262] Caches are synced for ReplicationController
I0114 10:21:33.871011 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:21:33.878451 1 shared_informer.go:262] Caches are synced for resource quota
I0114 10:21:33.918824 1 shared_informer.go:262] Caches are synced for attach detach
I0114 10:21:34.307035 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:21:34.329122 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 10:21:34.329140 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [d6ff8dba06d1] <==
* I0114 10:22:49.518439 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.526593 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0114 10:22:49.541388 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.542300 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0114 10:22:49.542312 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-f87d45d87 to 1"
E0114 10:22:49.561250 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.561822 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0114 10:22:49.561836 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.578600 1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0114 10:22:49.583274 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.583527 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.590159 1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0114 10:22:49.590462 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.590489 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0114 10:22:49.590502 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.615048 1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.615272 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.633985 1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.634059 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.644361 1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:49.644408 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0114 10:22:49.647659 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0114 10:22:49.647844 1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0114 10:22:50.091052 1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-f87d45d87-2qxk5"
I0114 10:22:50.099661 1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
*
* ==> kube-proxy [346374e1a810] <==
* I0114 10:22:15.826197 1 node.go:163] Successfully retrieved node IP: 192.168.39.97
I0114 10:22:15.826264 1 server_others.go:138] "Detected node IP" address="192.168.39.97"
I0114 10:22:15.826287 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:22:15.877550 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0114 10:22:15.877567 1 server_others.go:206] "Using iptables Proxier"
I0114 10:22:15.877585 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:22:15.877960 1 server.go:661] "Version info" version="v1.25.3"
I0114 10:22:15.877992 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:22:15.886201 1 config.go:317] "Starting service config controller"
I0114 10:22:15.886236 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:22:15.886314 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:22:15.886341 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:22:15.886931 1 config.go:444] "Starting node config controller"
I0114 10:22:15.886962 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:22:15.986758 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 10:22:15.986794 1 shared_informer.go:262] Caches are synced for service config
I0114 10:22:15.987047 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [b00430b08941] <==
* I0114 10:21:21.363403 1 node.go:163] Successfully retrieved node IP: 192.168.39.97
I0114 10:21:21.363940 1 server_others.go:138] "Detected node IP" address="192.168.39.97"
I0114 10:21:21.364224 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 10:21:21.479608 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0114 10:21:21.479645 1 server_others.go:206] "Using iptables Proxier"
I0114 10:21:21.479786 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 10:21:21.480267 1 server.go:661] "Version info" version="v1.25.3"
I0114 10:21:21.480302 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:21:21.484201 1 config.go:317] "Starting service config controller"
I0114 10:21:21.484212 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 10:21:21.484234 1 config.go:226] "Starting endpoint slice config controller"
I0114 10:21:21.484237 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 10:21:21.484869 1 config.go:444] "Starting node config controller"
I0114 10:21:21.484904 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 10:21:21.584488 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 10:21:21.584547 1 shared_informer.go:262] Caches are synced for service config
I0114 10:21:21.585180 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [c161ff402f21] <==
* E0114 10:21:21.272987 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0114 10:21:21.273036 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0114 10:21:21.273067 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0114 10:21:21.273117 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0114 10:21:21.273124 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0114 10:21:21.273159 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0114 10:21:21.273167 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0114 10:21:21.273230 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0114 10:21:21.273238 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0114 10:21:21.273282 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0114 10:21:21.273289 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0114 10:21:21.273324 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0114 10:21:21.273331 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0114 10:21:21.273363 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0114 10:21:21.273371 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0114 10:21:21.273416 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0114 10:21:21.273425 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0114 10:21:21.273657 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0114 10:21:21.273734 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0114 10:21:22.638787 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:21:46.816227 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0114 10:21:46.816386 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0114 10:21:46.816656 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0114 10:21:46.816745 1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
E0114 10:21:46.816790 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [d2acb554058a] <==
* I0114 10:22:10.492634 1 serving.go:348] Generated self-signed cert in-memory
W0114 10:22:13.321482 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0114 10:22:13.321877 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0114 10:22:13.322176 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 10:22:13.322202 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 10:22:13.373767 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 10:22:13.373960 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 10:22:13.375917 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 10:22:13.379159 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 10:22:13.379361 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 10:22:13.379181 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 10:22:13.480218 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-14 10:19:40 UTC, ends at Sat 2023-01-14 10:22:52 UTC. --
Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.754205 7815 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.870803 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") pod \"busybox-mount\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") " pod="default/busybox-mount"
Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.870846 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") pod \"busybox-mount\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") " pod="default/busybox-mount"
Jan 14 10:22:38 functional-101929 kubelet[7815]: I0114 10:22:38.971878 7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53"
Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.663727 7815 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.782620 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04455ae2-8b2e-481a-b409-9519898adf8f\" (UniqueName: \"kubernetes.io/host-path/db28c053-2b62-4cfe-9a29-f19ea84f3788-pvc-04455ae2-8b2e-481a-b409-9519898adf8f\") pod \"sp-pod\" (UID: \"db28c053-2b62-4cfe-9a29-f19ea84f3788\") " pod="default/sp-pod"
Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.782718 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czm5t\" (UniqueName: \"kubernetes.io/projected/db28c053-2b62-4cfe-9a29-f19ea84f3788-kube-api-access-czm5t\") pod \"sp-pod\" (UID: \"db28c053-2b62-4cfe-9a29-f19ea84f3788\") " pod="default/sp-pod"
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316212 7815 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") pod \"79f32661-00fa-4f08-8bdf-e3fccba88898\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") "
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316285 7815 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") pod \"79f32661-00fa-4f08-8bdf-e3fccba88898\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") "
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316374 7815 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume" (OuterVolumeSpecName: "test-volume") pod "79f32661-00fa-4f08-8bdf-e3fccba88898" (UID: "79f32661-00fa-4f08-8bdf-e3fccba88898"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.320924 7815 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m" (OuterVolumeSpecName: "kube-api-access-kbd5m") pod "79f32661-00fa-4f08-8bdf-e3fccba88898" (UID: "79f32661-00fa-4f08-8bdf-e3fccba88898"). InnerVolumeSpecName "kube-api-access-kbd5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.417284 7815 reconciler.go:399] "Volume detached for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") on node \"functional-101929\" DevicePath \"\""
Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.417311 7815 reconciler.go:399] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") on node \"functional-101929\" DevicePath \"\""
Jan 14 10:22:45 functional-101929 kubelet[7815]: I0114 10:22:45.112820 7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53"
Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.352607 7815 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:22:49 functional-101929 kubelet[7815]: E0114 10:22:49.352755 7815 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="79f32661-00fa-4f08-8bdf-e3fccba88898" containerName="mount-munger"
Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.352801 7815 memory_manager.go:345] "RemoveStaleState removing state" podUID="79f32661-00fa-4f08-8bdf-e3fccba88898" containerName="mount-munger"
Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.461952 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq992\" (UniqueName: \"kubernetes.io/projected/d2b9a27b-e145-480f-ab22-9370cbc49fe6-kube-api-access-pq992\") pod \"mysql-596b7fcdbf-mphb5\" (UID: \"d2b9a27b-e145-480f-ab22-9370cbc49fe6\") " pod="default/mysql-596b7fcdbf-mphb5"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.109746 7815 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.112637 7815 topology_manager.go:205] "Topology Admit Handler"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181283 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v824q\" (UniqueName: \"kubernetes.io/projected/f6e06a1c-11fe-4224-aa32-b29a20116240-kube-api-access-v824q\") pod \"kubernetes-dashboard-f87d45d87-2qxk5\" (UID: \"f6e06a1c-11fe-4224-aa32-b29a20116240\") " pod="kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181539 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv8n\" (UniqueName: \"kubernetes.io/projected/ca82315e-15db-4b1b-a5b1-8697cd50e03a-kube-api-access-pkv8n\") pod \"dashboard-metrics-scraper-5f5c79dd8f-qvjjj\" (UID: \"ca82315e-15db-4b1b-a5b1-8697cd50e03a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181753 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6e06a1c-11fe-4224-aa32-b29a20116240-tmp-volume\") pod \"kubernetes-dashboard-f87d45d87-2qxk5\" (UID: \"f6e06a1c-11fe-4224-aa32-b29a20116240\") " pod="kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5"
Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.182006 7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ca82315e-15db-4b1b-a5b1-8697cd50e03a-tmp-volume\") pod \"dashboard-metrics-scraper-5f5c79dd8f-qvjjj\" (UID: \"ca82315e-15db-4b1b-a5b1-8697cd50e03a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
Jan 14 10:22:51 functional-101929 kubelet[7815]: I0114 10:22:51.282159 7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8ab6f34547d848a8c5c277663f96e24242ca90d5f651a13e2650b30bd6766a77"
*
* ==> storage-provisioner [57fd1ef65853] <==
* I0114 10:22:15.704086 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:22:15.719908 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:22:15.719962 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 10:22:33.121301 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 10:22:33.121452 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1!
I0114 10:22:33.123567 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1274b56a-b9dc-4a69-9faa-d5d7b21cd8f1", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1 became leader
I0114 10:22:33.221885 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1!
I0114 10:22:39.455852 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0114 10:22:39.455896 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 16f5c87b-7dd8-46fa-aec7-c11544d5ac2b 366 0 2023-01-14 10:20:39 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-01-14 10:20:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-04455ae2-8b2e-481a-b409-9519898adf8f &PersistentVolumeClaim{ObjectMeta:{myclaim default 04455ae2-8b2e-481a-b409-9519898adf8f 657 0 2023-01-14 10:22:39 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2023-01-14 10:22:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-01-14 10:22:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0114 10:22:39.456281 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f" provisioned
I0114 10:22:39.456294 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0114 10:22:39.456302 1 volume_store.go:212] Trying to save persistentvolume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f"
I0114 10:22:39.458124 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"04455ae2-8b2e-481a-b409-9519898adf8f", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0114 10:22:39.500526 1 volume_store.go:219] persistentvolume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f" saved
I0114 10:22:39.500624 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"04455ae2-8b2e-481a-b409-9519898adf8f", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-04455ae2-8b2e-481a-b409-9519898adf8f
*
* ==> storage-provisioner [a526f6daec05] <==
* I0114 10:21:16.243186 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 10:21:21.359613 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 10:21:21.359950 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 10:21:38.767181 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 10:21:38.767626 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6!
I0114 10:21:38.768243 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1274b56a-b9dc-4a69-9faa-d5d7b21cd8f1", APIVersion:"v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6 became leader
I0114 10:21:38.868626 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101929 -n functional-101929
=== CONT TestFunctional/parallel/DashboardCmd
helpers_test.go:261: (dbg) Run: kubectl --context functional-101929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5: exit status 1 (94.312942ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-101929/192.168.39.97
Start Time: Sat, 14 Jan 2023 10:22:37 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Containers:
mount-munger:
Container ID: docker://4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 14 Jan 2023 10:22:42 +0000
Finished: Sat, 14 Jan 2023 10:22:42 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbd5m (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-kbd5m:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-101929
Normal  Pulling    15s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.802070875s
Normal  Created    11s   kubelet            Created container mount-munger
Normal  Started    11s   kubelet            Started container mount-munger
Name: mysql-596b7fcdbf-mphb5
Namespace: default
Priority: 0
Service Account: default
Node: functional-101929/192.168.39.97
Start Time: Sat, 14 Jan 2023 10:22:49 +0000
Labels: app=mysql
pod-template-hash=596b7fcdbf
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-596b7fcdbf
Containers:
mysql:
Container ID:
Image: mysql:5.7
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pq992 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-pq992:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type    Reason     Age   From               Message
----    ------     ----  ----               -------
Normal  Scheduled  3s    default-scheduler  Successfully assigned default/mysql-596b7fcdbf-mphb5 to functional-101929
Normal  Pulling    2s    kubelet            Pulling image "mysql:5.7"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-5f5c79dd8f-qvjjj" not found
Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-2qxk5" not found
** /stderr **
helpers_test.go:277: kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.74s)