=== RUN TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] stderr:
I0703 04:30:43.639286 19291 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:43.639626 19291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.639638 19291 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:43.639644 19291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.639887 19291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:43.640189 19291 mustload.go:65] Loading cluster: functional-502505
I0703 04:30:43.640641 19291 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:43.641238 19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.641285 19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.655991 19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
I0703 04:30:43.656418 19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.656939 19291 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.656961 19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.657290 19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.657479 19291 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:43.658886 19291 host.go:66] Checking if "functional-502505" exists ...
I0703 04:30:43.659196 19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.659242 19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.673621 19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
I0703 04:30:43.674026 19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.674424 19291 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.674447 19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.674775 19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.674987 19291 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.675126 19291 api_server.go:166] Checking apiserver status ...
I0703 04:30:43.675204 19291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 04:30:43.675243 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:43.677793 19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.678216 19291 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:43.678249 19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.678407 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:43.678578 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:43.678741 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:43.678900 19291 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:43.768679 19291 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4157/cgroup
W0703 04:30:43.778252 19291 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4157/cgroup: Process exited with status 1
stdout:
stderr:
I0703 04:30:43.778324 19291 ssh_runner.go:195] Run: ls
I0703 04:30:43.783015 19291 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8441/healthz ...
I0703 04:30:43.787107 19291 api_server.go:279] https://192.168.39.7:8441/healthz returned 200:
ok
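
	[The status check above appears to fall back: pgrep finds the apiserver pid, the freezer cgroup lookup fails (common on cgroup v2 guests), and minikube then probes the apiserver's /healthz endpoint directly, which returns 200. A minimal standalone sketch of such a probe, assuming the client cert/key and CA paths shown in the kapi.go client config further down this log; this is an illustration, not minikube's api_server.go implementation.]

	    package main

	    import (
	        "crypto/tls"
	        "crypto/x509"
	        "fmt"
	        "io"
	        "net/http"
	        "os"
	    )

	    func main() {
	        base := "/home/jenkins/minikube-integration/19184-3680/.minikube"
	        // Client certificate and key for the functional-502505 profile
	        // (paths taken from the kapi.go client config in this log).
	        cert, err := tls.LoadX509KeyPair(
	            base+"/profiles/functional-502505/client.crt",
	            base+"/profiles/functional-502505/client.key")
	        if err != nil {
	            panic(err)
	        }
	        caPEM, err := os.ReadFile(base + "/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        pool := x509.NewCertPool()
	        pool.AppendCertsFromPEM(caPEM)
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	        }}
	        resp, err := client.Get("https://192.168.39.7:8441/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Println(resp.StatusCode, string(body)) // the check above got 200 and "ok"
	    }
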
W0703 04:30:43.787144 19291 out.go:239] * Enabling dashboard ...
I0703 04:30:43.787292 19291 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:43.787308 19291 addons.go:69] Setting dashboard=true in profile "functional-502505"
I0703 04:30:43.787318 19291 addons.go:234] Setting addon dashboard=true in "functional-502505"
I0703 04:30:43.787348 19291 host.go:66] Checking if "functional-502505" exists ...
I0703 04:30:43.787648 19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.787688 19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.802431 19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
I0703 04:30:43.802792 19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.803277 19291 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.803304 19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.803623 19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.804047 19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.804082 19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.818507 19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41567
I0703 04:30:43.818870 19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.819281 19291 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.819303 19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.819616 19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.819784 19291 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:43.821200 19291 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.823567 19291 out.go:177] - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0703 04:30:43.825140 19291 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0703 04:30:43.826448 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0703 04:30:43.826462 19291 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0703 04:30:43.826479 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:43.828874 19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.829199 19291 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:43.829226 19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.829328 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:43.829491 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:43.829616 19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:43.829749 19291 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:43.960880 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0703 04:30:43.960908 19291 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0703 04:30:43.998156 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0703 04:30:43.998187 19291 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0703 04:30:44.022167 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0703 04:30:44.022191 19291 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0703 04:30:44.040119 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0703 04:30:44.040144 19291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0703 04:30:44.057560 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0703 04:30:44.057586 19291 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0703 04:30:44.075572 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0703 04:30:44.075596 19291 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0703 04:30:44.093339 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0703 04:30:44.093364 19291 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0703 04:30:44.111881 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0703 04:30:44.111902 19291 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0703 04:30:44.129647 19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0703 04:30:44.129670 19291 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0703 04:30:44.146537 19291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0703 04:30:45.803783 19291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.657173748s)
I0703 04:30:45.803903 19291 main.go:141] libmachine: Making call to close driver server
I0703 04:30:45.803929 19291 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:45.804193 19291 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:45.804216 19291 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:45.804225 19291 main.go:141] libmachine: Making call to close driver server
I0703 04:30:45.804233 19291 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:45.804506 19291 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:45.804510 19291 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:30:45.804523 19291 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:45.806334 19291 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p functional-502505 addons enable metrics-server
I0703 04:30:45.808113 19291 addons.go:197] Writing out "functional-502505" config to set dashboard=true...
W0703 04:30:45.808347 19291 out.go:239] * Verifying dashboard health ...
I0703 04:30:45.809208 19291 kapi.go:59] client config for functional-502505: &rest.Config{Host:"https://192.168.39.7:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.key", CAFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfc5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0703 04:30:45.830220 19291 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard kubernetes-dashboard f516666f-9709-4c64-886e-e779c2a2620c 818 0 2024-07-03 04:30:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-07-03 04:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.128.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.128.190],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0703 04:30:45.830338 19291 out.go:239] * Launching proxy ...
I0703 04:30:45.830398 19291 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-502505 proxy --port 36195]
I0703 04:30:45.830639 19291 dashboard.go:157] Waiting for kubectl to output host:port ...
I0703 04:30:45.894919 19291 out.go:177]
W0703 04:30:45.896434 19291 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0703 04:30:45.896453 19291 out.go:239] *
W0703 04:30:45.899394 19291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0703 04:30:45.900931 19291 out.go:177]
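
	[The fatal step is at dashboard.go:157: minikube waits for kubectl proxy to print its startup banner (a healthy proxy prints "Starting to serve on 127.0.0.1:36195"), but here the child exited immediately, so the read hit EOF and the run aborted with HOST_KUBECTL_PROXY before any URL could be produced. A rough sketch of that wait follows; readLineWithTimeout is a hypothetical name that approximates, line-wise, what the readByteWithTimeout named in the error does byte-by-byte, and is not minikube's actual code.]

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // readLineWithTimeout reads one line from r, giving up after timeout.
	    // If the child process has already exited, ReadString returns io.EOF
	    // immediately, which is the "readByteWithTimeout: EOF" seen above.
	    func readLineWithTimeout(r *bufio.Reader, timeout time.Duration) (string, error) {
	        type result struct {
	            line string
	            err  error
	        }
	        ch := make(chan result, 1)
	        go func() {
	            line, err := r.ReadString('\n')
	            ch <- result{line, err}
	        }()
	        select {
	        case res := <-ch:
	            return res.line, res.err
	        case <-time.After(timeout):
	            return "", fmt.Errorf("no output from kubectl proxy within %s", timeout)
	        }
	    }

	    func main() {
	        cmd := exec.Command("kubectl", "--context", "functional-502505", "proxy", "--port", "36195")
	        stdout, err := cmd.StdoutPipe()
	        if err != nil {
	            panic(err)
	        }
	        if err := cmd.Start(); err != nil {
	            panic(err)
	        }
	        // A healthy proxy prints: "Starting to serve on 127.0.0.1:36195".
	        line, err := readLineWithTimeout(bufio.NewReader(stdout), 30*time.Second)
	        fmt.Printf("line=%q err=%v\n", line, err)
	    }

	[kubectl proxy exits at once when, for example, it cannot bind the requested port, which would produce exactly this EOF; the proxy's own stderr is not captured here, so the root cause is not visible in this log.]
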
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-502505 -n functional-502505
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p functional-502505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 logs -n 25: (2.206268632s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs:
-- stdout --
==> Audit <==
|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| service | functional-502505 service list | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | -o json | | | | | |
| ssh | functional-502505 ssh findmnt | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | -T /mount3 | | | | | |
| service | functional-502505 service | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | --namespace=default --https | | | | | |
| | --url hello-node | | | | | |
| image | functional-502505 image load --daemon | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | gcr.io/google-containers/addon-resizer:functional-502505 | | | | | |
| | --alsologtostderr | | | | | |
| mount | -p functional-502505 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | |
| | --kill=true | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /etc/test/nested/copy/10844/hosts | | | | | |
| service | functional-502505 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | service hello-node --url | | | | | |
| | --format={{.IP}} | | | | | |
| service | functional-502505 service | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | hello-node --url | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /etc/ssl/certs/10844.pem | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /usr/share/ca-certificates/10844.pem | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /etc/ssl/certs/51391683.0 | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /etc/ssl/certs/108442.pem | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /usr/share/ca-certificates/108442.pem | | | | | |
| ssh | functional-502505 ssh sudo cat | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | /etc/ssl/certs/3ec20f2e.0 | | | | | |
| cp | functional-502505 cp | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | functional-502505 ssh -n | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | functional-502505 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-502505 cp | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | functional-502505:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd55629243/001/cp-test.txt | | | | | |
| ssh | functional-502505 ssh -n | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | functional-502505 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | functional-502505 cp | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-502505 ssh -n | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| | functional-502505 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| start | -p functional-502505 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | |
| | --dry-run --memory | | | | | |
| | 250MB --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p functional-502505 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | |
| | --dry-run --alsologtostderr | | | | | |
| | -v=1 --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| dashboard | --url --port 36195 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | |
| | -p functional-502505 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| image | functional-502505 image ls | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
| image | functional-502505 image load --daemon | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | |
| | gcr.io/google-containers/addon-resizer:functional-502505 | | | | | |
| | --alsologtostderr | | | | | |
|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/03 04:30:43
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.22.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0703 04:30:43.513038 19263 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:43.513195 19263 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.513208 19263 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:43.513215 19263 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.513414 19263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:43.514034 19263 out.go:298] Setting JSON to false
I0703 04:30:43.515070 19263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":787,"bootTime":1719980256,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0703 04:30:43.515135 19263 start.go:139] virtualization: kvm guest
I0703 04:30:43.517290 19263 out.go:177] * [functional-502505] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0703 04:30:43.518637 19263 notify.go:220] Checking for updates...
I0703 04:30:43.518664 19263 out.go:177] - MINIKUBE_LOCATION=19184
I0703 04:30:43.519943 19263 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0703 04:30:43.521178 19263 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
I0703 04:30:43.522406 19263 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
I0703 04:30:43.523681 19263 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0703 04:30:43.525040 19263 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0703 04:30:43.526817 19263 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:43.527224 19263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.527274 19263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.542481 19263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
I0703 04:30:43.542833 19263 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.543341 19263 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.543360 19263 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.543757 19263 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.544015 19263 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.544260 19263 driver.go:392] Setting default libvirt URI to qemu:///system
I0703 04:30:43.544546 19263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.544582 19263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.559258 19263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
I0703 04:30:43.559682 19263 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.560185 19263 main.go:141] libmachine: Using API Version 1
I0703 04:30:43.560204 19263 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.560490 19263 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.560671 19263 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.593146 19263 out.go:177] * Using the kvm2 driver based on existing profile
I0703 04:30:43.594341 19263 start.go:297] selected driver: kvm2
I0703 04:30:43.594356 19263 start.go:901] validating driver "kvm2" against &{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0703 04:30:43.594473 19263 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0703 04:30:43.595597 19263 cni.go:84] Creating CNI manager for ""
I0703 04:30:43.595614 19263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0703 04:30:43.595657 19263 start.go:340] cluster config:
{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0703 04:30:43.597290 19263 out.go:177] * dry-run validation complete!
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
4fca3c48aa9ba fffffc90d343c 5 seconds ago Running myfrontend 0 1b62f8db77069 sp-pod
82e5f2ae5bea8 82e4c8a736a4f 14 seconds ago Running echoserver 0 32c4ed3dda52b hello-node-6d85cfcfd8-cwf9k
3f7350c99caad 56cc512116c8f 14 seconds ago Exited mount-munger 0 c3a607b98ddc1 busybox-mount
b90969fb0f122 82e4c8a736a4f 17 seconds ago Running echoserver 0 b663a389fedbb hello-node-connect-57b4589c47-7njdl
0481ac4db61f9 cbb01a7bd410d 46 seconds ago Running coredns 2 37129e09f8ea0 coredns-7db6d8ff4d-fts6d
2d75fb5db730d 53c535741fb44 46 seconds ago Running kube-proxy 2 d4ce837288fa2 kube-proxy-spsjc
4fa84086025c8 6e38f40d628db 46 seconds ago Running storage-provisioner 4 44694fff18b49 storage-provisioner
6ffa7f8d09cb6 56ce0fd9fb532 50 seconds ago Running kube-apiserver 0 2906ff872c5fe kube-apiserver-functional-502505
df86e2c18a48b 7820c83aa1394 50 seconds ago Running kube-scheduler 2 bd44e5af5bfaa kube-scheduler-functional-502505
b97699681e706 e874818b3caac 50 seconds ago Running kube-controller-manager 2 90c2db6730e7b kube-controller-manager-functional-502505
09bfde035a632 3861cfcd7c04c 50 seconds ago Running etcd 2 838b26db56dee etcd-functional-502505
e10f91f31df27 6e38f40d628db 53 seconds ago Exited storage-provisioner 3 44694fff18b49 storage-provisioner
a6b795c693f2e e874818b3caac About a minute ago Exited kube-controller-manager 1 90c2db6730e7b kube-controller-manager-functional-502505
8c920df2e33c1 3861cfcd7c04c About a minute ago Exited etcd 1 838b26db56dee etcd-functional-502505
c8e41c6173772 7820c83aa1394 About a minute ago Exited kube-scheduler 1 bd44e5af5bfaa kube-scheduler-functional-502505
11b5748a2e821 cbb01a7bd410d About a minute ago Exited coredns 1 37129e09f8ea0 coredns-7db6d8ff4d-fts6d
8ffcd519e3130 53c535741fb44 About a minute ago Exited kube-proxy 1 d4ce837288fa2 kube-proxy-spsjc
==> containerd <==
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.133912286Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.135757622Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df\", size \"70984068\" in 6.240902466s"
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.135831946Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c\""
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.141773130Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.144195488Z" level=info msg="CreateContainer within sandbox \"1b62f8db7706904cfbcd95f96cbb76cf1d1a8175002340b9efafefe6ca7fd6ec\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.146469735Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.175051855Z" level=info msg="CreateContainer within sandbox \"1b62f8db7706904cfbcd95f96cbb76cf1d1a8175002340b9efafefe6ca7fd6ec\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\""
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.175880096Z" level=info msg="StartContainer for \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\""
Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.289807814Z" level=info msg="StartContainer for \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\" returns successfully"
Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.044244039Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.373937965Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-502505\""
Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.381286200Z" level=info msg="ImageCreate event name:\"sha256:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.382371336Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-502505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.596788326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-779776cb65-mnh76,Uid:e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c,Namespace:kubernetes-dashboard,Attempt:0,}"
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.616016342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-b5fc48f67-vktg7,Uid:ceb7d87e-e07a-4c85-b378-65b5ef7814a9,Namespace:kubernetes-dashboard,Attempt:0,}"
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925064993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925136555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925150819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925226826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.068844286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-779776cb65-mnh76,Uid:e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4801f71dffca7200da0678dd9d5fe2949693d55782526e5ad3578fed835669dc\""
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.101002773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.102176946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.104511065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.106380100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.277157703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-b5fc48f67-vktg7,Uid:ceb7d87e-e07a-4c85-b378-65b5ef7814a9,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"a0de7d5672ea36a7ff1283f1f434ed9115282cd8c519494c032e3b051be02afb\""
==> coredns [0481ac4db61f9d01c91a73599ce0c4e3bebdcd27c1d03c7799b0c5c360530d84] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:46565 - 64295 "HINFO IN 8235943687636550445.7495420946416754712. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014720995s
==> coredns [11b5748a2e821c10dc0c8d733cbbf50e6776a62dcdc3333fef860f3b5b959221] <==
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:58748 - 16425 "HINFO IN 7703097324571266106.1887883732849068066. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014464809s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: functional-502505
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-502505
kubernetes.io/os=linux
minikube.k8s.io/commit=6e34d4fd348f73f0f8af294cc2737aeb8da39e8d
minikube.k8s.io/name=functional-502505
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_03T04_28_24_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 03 Jul 2024 04:28:21 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-502505
AcquireTime: <unset>
RenewTime: Wed, 03 Jul 2024 04:30:40 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 03 Jul 2024 04:29:59 +0000 Wed, 03 Jul 2024 04:28:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 03 Jul 2024 04:29:59 +0000 Wed, 03 Jul 2024 04:28:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 03 Jul 2024 04:29:59 +0000 Wed, 03 Jul 2024 04:28:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 03 Jul 2024 04:29:59 +0000 Wed, 03 Jul 2024 04:28:24 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.7
Hostname: functional-502505
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 9345ffc03e304bb79c8a4a46fd9708fb
System UUID: 9345ffc0-3e30-4bb7-9c8a-4a46fd9708fb
Boot ID: fb3954ad-56a4-4777-b32f-e12c79ee1fd8
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.18
Kubelet Version: v1.30.2
Kube-Proxy Version: v1.30.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace             Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                         ------------  ----------  ---------------  -------------  ---
default               hello-node-6d85cfcfd8-cwf9k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
default               hello-node-connect-57b4589c47-7njdl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
default               mysql-64454c8b5c-x9ns2                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    7s
default               sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
kube-system           coredns-7db6d8ff4d-fts6d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m10s
kube-system           etcd-functional-502505                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m24s
kube-system           kube-apiserver-functional-502505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
kube-system           kube-controller-manager-functional-502505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
kube-system           kube-proxy-spsjc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
kube-system           kube-scheduler-functional-502505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
kube-system           storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
kubernetes-dashboard  dashboard-metrics-scraper-b5fc48f67-vktg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
kubernetes-dashboard  kubernetes-dashboard-779776cb65-mnh76        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                1350m (67%)  700m (35%)
memory             682Mi (17%)  870Mi (22%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m8s kube-proxy
Normal Starting 46s kube-proxy
Normal Starting 98s kube-proxy
Normal NodeHasSufficientMemory 2m30s (x8 over 2m30s) kubelet Node functional-502505 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m30s (x8 over 2m30s) kubelet Node functional-502505 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m30s (x7 over 2m30s) kubelet Node functional-502505 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m30s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m24s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m24s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m24s kubelet Node functional-502505 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m24s kubelet Node functional-502505 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m24s kubelet Node functional-502505 status is now: NodeHasSufficientPID
Normal NodeReady 2m23s kubelet Node functional-502505 status is now: NodeReady
Normal RegisteredNode 2m10s node-controller Node functional-502505 event: Registered Node functional-502505 in Controller
Normal Starting 103s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 103s (x8 over 103s) kubelet Node functional-502505 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 103s (x8 over 103s) kubelet Node functional-502505 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 103s (x7 over 103s) kubelet Node functional-502505 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 103s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 87s node-controller Node functional-502505 event: Registered Node functional-502505 in Controller
Normal NodeHasNoDiskPressure 51s (x8 over 51s) kubelet Node functional-502505 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 51s (x8 over 51s) kubelet Node functional-502505 status is now: NodeHasSufficientMemory
Normal Starting 51s kubelet Starting kubelet.
Normal NodeHasSufficientPID 51s (x7 over 51s) kubelet Node functional-502505 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 51s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 34s node-controller Node functional-502505 event: Registered Node functional-502505 in Controller
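
	[A quick consistency check on the Allocated resources figures above, using the node's allocatable capacity (cpu 2 = 2000m; memory 3912780Ki ≈ 3821Mi) and assuming kubectl truncates percentages toward zero:

	    cpu requests:    1350m / 2000m  = 67.5%  -> shown as 67%
	    cpu limits:       700m / 2000m  = 35.0%  -> shown as 35%
	    memory requests:  682Mi / 3821Mi = 17.8%  -> shown as 17%
	    memory limits:    870Mi / 3821Mi = 22.8%  -> shown as 22%]
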
==> dmesg <==
[ +0.159765] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
[ +0.329422] systemd-fstab-generator[2031]: Ignoring "noauto" option for root device
[ +1.854123] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
[ +5.816165] kauditd_printk_skb: 122 callbacks suppressed
[Jul 3 04:29] kauditd_printk_skb: 9 callbacks suppressed
[ +1.598418] systemd-fstab-generator[2690]: Ignoring "noauto" option for root device
[ +4.573552] kauditd_printk_skb: 36 callbacks suppressed
[ +15.140776] systemd-fstab-generator[3003]: Ignoring "noauto" option for root device
[ +12.483347] systemd-fstab-generator[3301]: Ignoring "noauto" option for root device
[ +0.086376] kauditd_printk_skb: 12 callbacks suppressed
[ +0.070087] systemd-fstab-generator[3313]: Ignoring "noauto" option for root device
[ +0.161981] systemd-fstab-generator[3327]: Ignoring "noauto" option for root device
[ +0.137117] systemd-fstab-generator[3339]: Ignoring "noauto" option for root device
[ +0.298061] systemd-fstab-generator[3368]: Ignoring "noauto" option for root device
[ +1.943183] systemd-fstab-generator[3529]: Ignoring "noauto" option for root device
[ +11.091048] kauditd_printk_skb: 126 callbacks suppressed
[ +6.368397] systemd-fstab-generator[3952]: Ignoring "noauto" option for root device
[Jul 3 04:30] kauditd_printk_skb: 39 callbacks suppressed
[ +14.969257] systemd-fstab-generator[4407]: Ignoring "noauto" option for root device
[ +0.082234] kauditd_printk_skb: 8 callbacks suppressed
[ +6.123777] kauditd_printk_skb: 12 callbacks suppressed
[ +5.381685] kauditd_printk_skb: 21 callbacks suppressed
[ +5.478942] kauditd_printk_skb: 37 callbacks suppressed
[ +7.629937] kauditd_printk_skb: 22 callbacks suppressed
[ +5.473772] kauditd_printk_skb: 11 callbacks suppressed
==> etcd [09bfde035a6322616aa99ea4e7d3e6737f116467c03f7e48d7e9fe84a2ca512b] <==
{"level":"info","ts":"2024-07-03T04:29:58.608978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 4"}
{"level":"info","ts":"2024-07-03T04:29:58.614048Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:functional-502505 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-03T04:29:58.614223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T04:29:58.616172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-07-03T04:29:58.61856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T04:29:58.618834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-03T04:29:58.618864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-03T04:29:58.622195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
{"level":"warn","ts":"2024-07-03T04:30:40.358589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.919616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/hello-node-connect\" ","response":"range_response_count:1 size:655"}
{"level":"info","ts":"2024-07-03T04:30:40.358996Z","caller":"traceutil/trace.go:171","msg":"trace[453369321] range","detail":"{range_begin:/registry/services/endpoints/default/hello-node-connect; range_end:; response_count:1; response_revision:737; }","duration":"116.377532ms","start":"2024-07-03T04:30:40.242599Z","end":"2024-07-03T04:30:40.358976Z","steps":["trace[453369321] 'agreement among raft nodes before linearized reading' (duration: 115.694157ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:40.359296Z","caller":"traceutil/trace.go:171","msg":"trace[1858571883] transaction","detail":"{read_only:false; response_revision:737; number_of_response:1; }","duration":"119.227809ms","start":"2024-07-03T04:30:40.240055Z","end":"2024-07-03T04:30:40.359283Z","steps":["trace[1858571883] 'process raft request' (duration: 108.665914ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:40.3597Z","caller":"traceutil/trace.go:171","msg":"trace[520494723] linearizableReadLoop","detail":"{readStateIndex:801; appliedIndex:800; }","duration":"111.728249ms","start":"2024-07-03T04:30:40.242625Z","end":"2024-07-03T04:30:40.354353Z","steps":["trace[520494723] 'read index received' (duration: 105.980669ms)","trace[520494723] 'applied index is now lower than readState.Index' (duration: 5.74522ms)"],"step_count":2}
{"level":"info","ts":"2024-07-03T04:30:45.059988Z","caller":"traceutil/trace.go:171","msg":"trace[34802432] transaction","detail":"{read_only:false; response_revision:780; number_of_response:1; }","duration":"106.467116ms","start":"2024-07-03T04:30:44.953503Z","end":"2024-07-03T04:30:45.059971Z","steps":["trace[34802432] 'process raft request' (duration: 85.951244ms)","trace[34802432] 'compare' (duration: 20.438802ms)"],"step_count":2}
{"level":"info","ts":"2024-07-03T04:30:45.060662Z","caller":"traceutil/trace.go:171","msg":"trace[257287578] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"100.568337ms","start":"2024-07-03T04:30:44.960085Z","end":"2024-07-03T04:30:45.060653Z","steps":["trace[257287578] 'process raft request' (duration: 100.313228ms)"],"step_count":1}
{"level":"warn","ts":"2024-07-03T04:30:45.064979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.297327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4584"}
{"level":"info","ts":"2024-07-03T04:30:45.065046Z","caller":"traceutil/trace.go:171","msg":"trace[1604445107] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:783; }","duration":"101.396098ms","start":"2024-07-03T04:30:44.963637Z","end":"2024-07-03T04:30:45.065033Z","steps":["trace[1604445107] 'agreement among raft nodes before linearized reading' (duration: 101.240555ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:45.065158Z","caller":"traceutil/trace.go:171","msg":"trace[633190272] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"105.019503ms","start":"2024-07-03T04:30:44.960133Z","end":"2024-07-03T04:30:45.065152Z","steps":["trace[633190272] 'process raft request' (duration: 100.364237ms)"],"step_count":1}
{"level":"warn","ts":"2024-07-03T04:30:45.090032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.091343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-07-03T04:30:45.091945Z","caller":"traceutil/trace.go:171","msg":"trace[1414394451] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:783; }","duration":"122.036209ms","start":"2024-07-03T04:30:44.969893Z","end":"2024-07-03T04:30:45.09193Z","steps":["trace[1414394451] 'agreement among raft nodes before linearized reading' (duration: 119.47874ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:45.339947Z","caller":"traceutil/trace.go:171","msg":"trace[333853410] transaction","detail":"{read_only:false; response_revision:792; number_of_response:1; }","duration":"114.409227ms","start":"2024-07-03T04:30:45.225521Z","end":"2024-07-03T04:30:45.33993Z","steps":["trace[333853410] 'process raft request' (duration: 107.04566ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:45.339935Z","caller":"traceutil/trace.go:171","msg":"trace[654767916] linearizableReadLoop","detail":"{readStateIndex:857; appliedIndex:856; }","duration":"102.913998ms","start":"2024-07-03T04:30:45.237Z","end":"2024-07-03T04:30:45.339914Z","steps":["trace[654767916] 'read index received' (duration: 95.529893ms)","trace[654767916] 'applied index is now lower than readState.Index' (duration: 7.383229ms)"],"step_count":2}
{"level":"warn","ts":"2024-07-03T04:30:45.340147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.127806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" ","response":"range_response_count:1 size:897"}
{"level":"info","ts":"2024-07-03T04:30:45.340182Z","caller":"traceutil/trace.go:171","msg":"trace[148790479] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:792; }","duration":"103.202894ms","start":"2024-07-03T04:30:45.23697Z","end":"2024-07-03T04:30:45.340173Z","steps":["trace[148790479] 'agreement among raft nodes before linearized reading' (duration: 103.003527ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:45.341658Z","caller":"traceutil/trace.go:171","msg":"trace[199208681] transaction","detail":"{read_only:false; response_revision:794; number_of_response:1; }","duration":"103.462782ms","start":"2024-07-03T04:30:45.238186Z","end":"2024-07-03T04:30:45.341649Z","steps":["trace[199208681] 'process raft request' (duration: 103.426321ms)"],"step_count":1}
{"level":"info","ts":"2024-07-03T04:30:45.341952Z","caller":"traceutil/trace.go:171","msg":"trace[372107416] transaction","detail":"{read_only:false; response_revision:793; number_of_response:1; }","duration":"104.799118ms","start":"2024-07-03T04:30:45.237135Z","end":"2024-07-03T04:30:45.341934Z","steps":["trace[372107416] 'process raft request' (duration: 104.415622ms)"],"step_count":1}
==> etcd [8c920df2e33c1a890b3c38828cc235ecd658e4df72447433c5e4733ba69c3c67] <==
{"level":"info","ts":"2024-07-03T04:29:05.234011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.7:2380"}
{"level":"info","ts":"2024-07-03T04:29:06.748596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b is starting a new election at term 2"}
{"level":"info","ts":"2024-07-03T04:29:06.748838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-03T04:29:06.749078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgPreVoteResp from bb39151d8411994b at term 2"}
{"level":"info","ts":"2024-07-03T04:29:06.749281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became candidate at term 3"}
{"level":"info","ts":"2024-07-03T04:29:06.749485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgVoteResp from bb39151d8411994b at term 3"}
{"level":"info","ts":"2024-07-03T04:29:06.749589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became leader at term 3"}
{"level":"info","ts":"2024-07-03T04:29:06.74971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 3"}
{"level":"info","ts":"2024-07-03T04:29:06.756057Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:functional-502505 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
{"level":"info","ts":"2024-07-03T04:29:06.756075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T04:29:06.756515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-07-03T04:29:06.756548Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-07-03T04:29:06.756106Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-07-03T04:29:06.759482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
{"level":"info","ts":"2024-07-03T04:29:06.760668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-07-03T04:29:39.878455Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-07-03T04:29:39.878573Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-502505","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
{"level":"warn","ts":"2024-07-03T04:29:39.878657Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-03T04:29:39.878736Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-03T04:29:39.896249Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-03T04:29:39.896296Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
{"level":"info","ts":"2024-07-03T04:29:39.896527Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bb39151d8411994b","current-leader-member-id":"bb39151d8411994b"}
{"level":"info","ts":"2024-07-03T04:29:39.900187Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
{"level":"info","ts":"2024-07-03T04:29:39.900365Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
{"level":"info","ts":"2024-07-03T04:29:39.900455Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-502505","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
==> kernel <==
04:30:47 up 3 min, 0 users, load average: 2.08, 0.72, 0.26
Linux functional-502505 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [6ffa7f8d09cb67a67f310c0d98c2f76308ceb96177f628f863713b7f9761a577] <==
I0703 04:29:59.968836 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0703 04:29:59.969045 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0703 04:29:59.969521 1 aggregator.go:165] initial CRD sync complete...
I0703 04:29:59.969662 1 autoregister_controller.go:141] Starting autoregister controller
I0703 04:29:59.969775 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0703 04:29:59.969811 1 cache.go:39] Caches are synced for autoregister controller
I0703 04:29:59.975094 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
E0703 04:29:59.999650 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0703 04:30:00.048446 1 shared_informer.go:320] Caches are synced for node_authorizer
I0703 04:30:00.861228 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0703 04:30:01.912367 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0703 04:30:01.928728 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0703 04:30:01.994769 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0703 04:30:02.034885 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0703 04:30:02.049073 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0703 04:30:18.675192 1 controller.go:615] quota admission added evaluator for: endpoints
I0703 04:30:21.900719 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.80.80"}
I0703 04:30:21.914628 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0703 04:30:26.899120 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0703 04:30:26.993843 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.138.228"}
I0703 04:30:28.050899 1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.185.11"}
I0703 04:30:40.373347 1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.207.19"}
I0703 04:30:44.814188 1 controller.go:615] quota admission added evaluator for: namespaces
I0703 04:30:45.657648 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.128.190"}
I0703 04:30:45.787065 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.37.201"}
==> kube-controller-manager [a6b795c693f2efdbeff597fe344cacc689f8ef8214a1e30f4d22237ef34105ff] <==
I0703 04:29:20.567323 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0703 04:29:20.567566 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I0703 04:29:20.568216 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0703 04:29:20.570687 1 shared_informer.go:320] Caches are synced for ephemeral
I0703 04:29:20.572884 1 shared_informer.go:320] Caches are synced for HPA
I0703 04:29:20.575125 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0703 04:29:20.578135 1 shared_informer.go:320] Caches are synced for expand
I0703 04:29:20.579401 1 shared_informer.go:320] Caches are synced for ReplicationController
I0703 04:29:20.580702 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0703 04:29:20.606491 1 shared_informer.go:320] Caches are synced for taint
I0703 04:29:20.606625 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0703 04:29:20.607319 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-502505"
I0703 04:29:20.607648 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0703 04:29:20.639498 1 shared_informer.go:320] Caches are synced for persistent volume
I0703 04:29:20.684524 1 shared_informer.go:320] Caches are synced for PV protection
I0703 04:29:20.686110 1 shared_informer.go:320] Caches are synced for attach detach
I0703 04:29:20.723557 1 shared_informer.go:320] Caches are synced for service account
I0703 04:29:20.750973 1 shared_informer.go:320] Caches are synced for namespace
I0703 04:29:20.773114 1 shared_informer.go:320] Caches are synced for resource quota
I0703 04:29:20.777540 1 shared_informer.go:320] Caches are synced for endpoint
I0703 04:29:20.792142 1 shared_informer.go:320] Caches are synced for resource quota
I0703 04:29:20.820203 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0703 04:29:21.189195 1 shared_informer.go:320] Caches are synced for garbage collector
I0703 04:29:21.260747 1 shared_informer.go:320] Caches are synced for garbage collector
I0703 04:29:21.260926 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
==> kube-controller-manager [b97699681e7066784af8d12bee3b5135464edf4557bf197ad6639d50ebcca6bb] <==
I0703 04:30:45.063491 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="174.137026ms"
E0703 04:30:45.063539 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.087174 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="164.352318ms"
E0703 04:30:45.087385 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.118462 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="54.125867ms"
E0703 04:30:45.118508 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.156709 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="69.003525ms"
E0703 04:30:45.156780 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.160329 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="41.793015ms"
E0703 04:30:45.160376 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.188459 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="28.001816ms"
E0703 04:30:45.188720 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.197485 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="40.673502ms"
E0703 04:30:45.197526 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.200699 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.835631ms"
E0703 04:30:45.200741 1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.205167 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.617178ms"
E0703 04:30:45.205188 1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0703 04:30:45.378062 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="158.99929ms"
I0703 04:30:45.396031 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="164.108076ms"
I0703 04:30:45.467604 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="71.401883ms"
I0703 04:30:45.467680 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="31.869µs"
I0703 04:30:45.501891 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="123.768047ms"
I0703 04:30:45.501966 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="26.705µs"
I0703 04:30:45.534496 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="91.19µs"
==> kube-proxy [2d75fb5db730d72e85ad104eadb239da997af1a3483c2b533c8bf3b7f954ec3f] <==
I0703 04:30:01.327080 1 server_linux.go:69] "Using iptables proxy"
I0703 04:30:01.336490 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
I0703 04:30:01.375629 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0703 04:30:01.375678 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0703 04:30:01.375695 1 server_linux.go:165] "Using iptables Proxier"
I0703 04:30:01.378698 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0703 04:30:01.379130 1 server.go:872] "Version info" version="v1.30.2"
I0703 04:30:01.379161 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 04:30:01.380338 1 config.go:192] "Starting service config controller"
I0703 04:30:01.380384 1 shared_informer.go:313] Waiting for caches to sync for service config
I0703 04:30:01.380452 1 config.go:101] "Starting endpoint slice config controller"
I0703 04:30:01.380457 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0703 04:30:01.381487 1 config.go:319] "Starting node config controller"
I0703 04:30:01.381513 1 shared_informer.go:313] Waiting for caches to sync for node config
I0703 04:30:01.481230 1 shared_informer.go:320] Caches are synced for service config
I0703 04:30:01.481299 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0703 04:30:01.481673 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [8ffcd519e31304c6e70748464e3fd58095226f9a7d59ee4f64c59119c83aadb7] <==
I0703 04:28:53.252670 1 server_linux.go:69] "Using iptables proxy"
E0703 04:28:53.260324 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
E0703 04:28:54.274633 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
E0703 04:28:56.283899 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
E0703 04:29:00.960148 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
I0703 04:29:09.700955 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
I0703 04:29:09.735290 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0703 04:29:09.735341 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0703 04:29:09.735358 1 server_linux.go:165] "Using iptables Proxier"
I0703 04:29:09.737976 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0703 04:29:09.738375 1 server.go:872] "Version info" version="v1.30.2"
I0703 04:29:09.738723 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 04:29:09.740151 1 config.go:192] "Starting service config controller"
I0703 04:29:09.740188 1 shared_informer.go:313] Waiting for caches to sync for service config
I0703 04:29:09.740215 1 config.go:101] "Starting endpoint slice config controller"
I0703 04:29:09.740240 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0703 04:29:09.740877 1 config.go:319] "Starting node config controller"
I0703 04:29:09.740907 1 shared_informer.go:313] Waiting for caches to sync for node config
I0703 04:29:09.840379 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0703 04:29:09.840519 1 shared_informer.go:320] Caches are synced for service config
I0703 04:29:09.840994 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [c8e41c6173772c406e61d6ff4ae97f8c22aba3e1de1f9439658992549b987208] <==
I0703 04:29:05.992588 1 serving.go:380] Generated self-signed cert in-memory
I0703 04:29:08.104138 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
I0703 04:29:08.104290 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 04:29:08.110008 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0703 04:29:08.110266 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0703 04:29:08.110355 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
I0703 04:29:08.110530 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0703 04:29:08.112500 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0703 04:29:08.112600 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0703 04:29:08.112624 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0703 04:29:08.112772 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0703 04:29:08.210569 1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
I0703 04:29:08.213166 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0703 04:29:08.213556 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0703 04:29:39.960944 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0703 04:29:39.961081 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0703 04:29:39.961264 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0703 04:29:39.961326 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0703 04:29:39.961348 1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
E0703 04:29:39.961343 1 run.go:74] "command failed" err="finished without leader elect"
==> kube-scheduler [df86e2c18a48bbc09e7082b9546dc32b019922e5eab63a3c4d24ad60adcbeca4] <==
I0703 04:29:57.985808 1 serving.go:380] Generated self-signed cert in-memory
W0703 04:29:59.926879 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0703 04:29:59.926920 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0703 04:29:59.927006 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0703 04:29:59.927013 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0703 04:29:59.983862 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
I0703 04:29:59.985881 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0703 04:29:59.989177 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0703 04:29:59.992029 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0703 04:30:00.000251 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0703 04:29:59.992129 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0703 04:30:00.100954 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 03 04:30:30 functional-502505 kubelet[3959]: I0703 04:30:30.856292 3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-57b4589c47-7njdl" podStartSLOduration=2.183319736 podStartE2EDuration="4.856273103s" podCreationTimestamp="2024-07-03 04:30:26 +0000 UTC" firstStartedPulling="2024-07-03 04:30:27.499671808 +0000 UTC m=+30.960368057" lastFinishedPulling="2024-07-03 04:30:30.172625164 +0000 UTC m=+33.633321424" observedRunningTime="2024-07-03 04:30:30.855470986 +0000 UTC m=+34.316167253" watchObservedRunningTime="2024-07-03 04:30:30.856273103 +0000 UTC m=+34.316969360"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.001936 3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-6d85cfcfd8-cwf9k" podStartSLOduration=4.024547563 podStartE2EDuration="8.001918572s" podCreationTimestamp="2024-07-03 04:30:27 +0000 UTC" firstStartedPulling="2024-07-03 04:30:28.770833641 +0000 UTC m=+32.231529894" lastFinishedPulling="2024-07-03 04:30:32.748204644 +0000 UTC m=+36.208900903" observedRunningTime="2024-07-03 04:30:33.894551076 +0000 UTC m=+37.355247343" watchObservedRunningTime="2024-07-03 04:30:35.001918572 +0000 UTC m=+38.462614840"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118114 3959 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume\") pod \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\" (UID: \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\") "
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118163 3959 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgnt\" (UniqueName: \"kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt\") pod \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\" (UID: \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\") "
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118359 3959 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume" (OuterVolumeSpecName: "test-volume") pod "45cbb956-6396-45ea-be8b-0b0b06dcc5f8" (UID: "45cbb956-6396-45ea-be8b-0b0b06dcc5f8"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.120675 3959 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt" (OuterVolumeSpecName: "kube-api-access-qlgnt") pod "45cbb956-6396-45ea-be8b-0b0b06dcc5f8" (UID: "45cbb956-6396-45ea-be8b-0b0b06dcc5f8"). InnerVolumeSpecName "kube-api-access-qlgnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.219273 3959 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qlgnt\" (UniqueName: \"kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt\") on node \"functional-502505\" DevicePath \"\""
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.219605 3959 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume\") on node \"functional-502505\" DevicePath \"\""
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.385557 3959 topology_manager.go:215] "Topology Admit Handler" podUID="6364f737-43b9-4e1f-a857-b6edb68c8b98" podNamespace="default" podName="sp-pod"
Jul 03 04:30:35 functional-502505 kubelet[3959]: E0703 04:30:35.385740 3959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cbb956-6396-45ea-be8b-0b0b06dcc5f8" containerName="mount-munger"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.385796 3959 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cbb956-6396-45ea-be8b-0b0b06dcc5f8" containerName="mount-munger"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.522379 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd\" (UniqueName: \"kubernetes.io/host-path/6364f737-43b9-4e1f-a857-b6edb68c8b98-pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd\") pod \"sp-pod\" (UID: \"6364f737-43b9-4e1f-a857-b6edb68c8b98\") " pod="default/sp-pod"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.522814 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mln4c\" (UniqueName: \"kubernetes.io/projected/6364f737-43b9-4e1f-a857-b6edb68c8b98-kube-api-access-mln4c\") pod \"sp-pod\" (UID: \"6364f737-43b9-4e1f-a857-b6edb68c8b98\") " pod="default/sp-pod"
Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.875937 3959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a607b98ddc1d5d56db7846a3f538166a3d22ed1baffb81bdbdf98628e7ff8d"
Jul 03 04:30:40 functional-502505 kubelet[3959]: I0703 04:30:40.505881 3959 topology_manager.go:215] "Topology Admit Handler" podUID="9677e95a-f370-4c57-8eb2-e7e44dd91562" podNamespace="default" podName="mysql-64454c8b5c-x9ns2"
Jul 03 04:30:40 functional-502505 kubelet[3959]: I0703 04:30:40.663918 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6kkg\" (UniqueName: \"kubernetes.io/projected/9677e95a-f370-4c57-8eb2-e7e44dd91562-kube-api-access-z6kkg\") pod \"mysql-64454c8b5c-x9ns2\" (UID: \"9677e95a-f370-4c57-8eb2-e7e44dd91562\") " pod="default/mysql-64454c8b5c-x9ns2"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.389571 3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.141531456 podStartE2EDuration="10.389553893s" podCreationTimestamp="2024-07-03 04:30:35 +0000 UTC" firstStartedPulling="2024-07-03 04:30:35.89260554 +0000 UTC m=+39.353301789" lastFinishedPulling="2024-07-03 04:30:42.140627965 +0000 UTC m=+45.601324226" observedRunningTime="2024-07-03 04:30:42.916535786 +0000 UTC m=+46.377232055" watchObservedRunningTime="2024-07-03 04:30:45.389553893 +0000 UTC m=+48.850250161"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.389933 3959 topology_manager.go:215] "Topology Admit Handler" podUID="e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-mnh76"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.392577 3959 topology_manager.go:215] "Topology Admit Handler" podUID="ceb7d87e-e07a-4c85-b378-65b5ef7814a9" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-vktg7"
Jul 03 04:30:45 functional-502505 kubelet[3959]: W0703 04:30:45.422592 3959 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-502505" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-502505' and this object
Jul 03 04:30:45 functional-502505 kubelet[3959]: E0703 04:30:45.422699 3959 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-502505" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-502505' and this object
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498637 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsgl\" (UniqueName: \"kubernetes.io/projected/e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c-kube-api-access-bpsgl\") pod \"kubernetes-dashboard-779776cb65-mnh76\" (UID: \"e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-mnh76"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498697 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ceb7d87e-e07a-4c85-b378-65b5ef7814a9-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-vktg7\" (UID: \"ceb7d87e-e07a-4c85-b378-65b5ef7814a9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vktg7"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498722 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjkc\" (UniqueName: \"kubernetes.io/projected/ceb7d87e-e07a-4c85-b378-65b5ef7814a9-kube-api-access-kbjkc\") pod \"dashboard-metrics-scraper-b5fc48f67-vktg7\" (UID: \"ceb7d87e-e07a-4c85-b378-65b5ef7814a9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vktg7"
Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498753 3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-mnh76\" (UID: \"e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-mnh76"
==> storage-provisioner [4fa84086025c891289da45d161c96d31e29204a01d8499ff745db7ecf20b92aa] <==
I0703 04:30:01.264214 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0703 04:30:01.276687 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0703 04:30:01.276753 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0703 04:30:18.682191 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0703 04:30:18.682585 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb!
I0703 04:30:18.683492 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a84bf370-0499-476c-9405-a83d581135e6", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb became leader
I0703 04:30:18.783841 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb!
I0703 04:30:32.535550 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0703 04:30:32.536594 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard fc4694f2-480d-42f7-95de-3178fadbbf36 383 0 2024-07-03 04:28:38 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-03 04:28:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd &PersistentVolumeClaim{ObjectMeta:{myclaim default b0108cbd-122c-40dd-9f09-62f07633b3cd 705 0 2024-07-03 04:30:32 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-07-03 04:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-03 04:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0703 04:30:32.537663 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b0108cbd-122c-40dd-9f09-62f07633b3cd", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0703 04:30:32.537967 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd" provisioned
I0703 04:30:32.538012 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0703 04:30:32.538024 1 volume_store.go:212] Trying to save persistentvolume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd"
I0703 04:30:32.601273 1 volume_store.go:219] persistentvolume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd" saved
I0703 04:30:32.604596 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b0108cbd-122c-40dd-9f09-62f07633b3cd", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd
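For reference, the claim the provisioner reports binding above can be inspected directly with client-go; a rough sketch, assuming a kubeconfig at the default path and using the "default/myclaim" name taken from the log above (a hypothetical debugging aid, not part of the test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) points at this cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "default/myclaim" is the claim named in the provisioner log above.
	pvc, err := client.CoreV1().PersistentVolumeClaims("default").
		Get(context.Background(), "myclaim", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// After a successful run, the phase should be Bound and the volume the
	// pvc-... name logged above.
	fmt.Printf("phase=%s boundVolume=%s\n", pvc.Status.Phase, pvc.Spec.VolumeName)
}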
==> storage-provisioner [e10f91f31df27081e9585ebfaaa185dd7c123f94fa5fd567a5c3e4fb6e0253bb] <==
I0703 04:29:54.444222 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0703 04:29:54.445969 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-502505 -n functional-502505
helpers_test.go:261: (dbg) Run: kubectl --context functional-502505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76: exit status 1 (77.874083ms)
-- stdout --
Name: busybox-mount
Namespace: default
Priority: 0
Service Account: default
Node: functional-502505/192.168.39.7
Start Time: Wed, 03 Jul 2024 04:30:27 +0000
Labels: integration-test=busybox-mount
Annotations: <none>
Status: Succeeded
IP: 10.244.0.5
IPs:
IP: 10.244.0.5
Containers:
mount-munger:
Container ID: containerd://3f7350c99caad76816d94685e180db1d2ceaaade213649297229c537f5f1937b
Image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID: gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 Jul 2024 04:30:32 +0000
Finished: Wed, 03 Jul 2024 04:30:32 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlgnt (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: HostPath (bare host directory volume)
Path: /mount-9p
HostPathType:
kube-api-access-qlgnt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned default/busybox-mount to functional-502505
Normal Pulling 20s kubelet Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Normal Pulled 16s kubelet Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.485s (3.906s including waiting). Image size: 2395207 bytes.
Normal Created 16s kubelet Created container mount-munger
Normal Started 16s kubelet Started container mount-munger
Name: mysql-64454c8b5c-x9ns2
Namespace: default
Priority: 0
Service Account: default
Node: functional-502505/192.168.39.7
Start Time: Wed, 03 Jul 2024 04:30:40 +0000
Labels: app=mysql
pod-template-hash=64454c8b5c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-64454c8b5c
Containers:
mysql:
Container ID:
Image: docker.io/mysql:5.7
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 700m
memory: 700Mi
Requests:
cpu: 600m
memory: 512Mi
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6kkg (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-z6kkg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/mysql-64454c8b5c-x9ns2 to functional-502505
Normal Pulling 7s kubelet Pulling image "docker.io/mysql:5.7"
-- /stdout --
** stderr **
Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-vktg7" not found
Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-mnh76" not found
** /stderr **
helpers_test.go:279: kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.34s)
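For debugging outside the harness, the non-running-pods query that helpers_test.go issues via kubectl above (--field-selector=status.phase!=Running) can be reproduced programmatically; a hedged client-go sketch under the same default-kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same field selector as the post-mortem kubectl call above, across all
	// namespaces (NamespaceAll is the empty string).
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}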