Test Report: KVM_Linux 21512

67b6671f4b7f755dd397ae36ae992d15d1f5bc42:2025-09-08:41332

Failed tests (3/345)

Order  Failed test                                    Duration (s)
91     TestFunctional/parallel/DashboardCmd           301.98
100    TestFunctional/parallel/PersistentVolumeClaim  368.63
104    TestFunctional/parallel/MySQL                  602.4
TestFunctional/parallel/DashboardCmd (301.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
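The failing assertion above is simply that the dashboard subprocess never printed a URL on stdout before the harness gave up. A minimal sketch of that pattern in Go, with the binary name, profile, regexp, and timeout as illustrative assumptions rather than the harness's actual helpers:

```go
// Sketch only: the shape of the check that failed above — start
// `minikube dashboard --url` and wait for a URL line on stdout.
// Binary name, profile, regexp, and timeout are illustrative assumptions,
// not the harness's actual code.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	cmd := exec.Command("minikube", "dashboard", "--url", "-p", "functional-799296")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill() // mirror the harness stopping the daemon on failure

	urls := make(chan string, 1)
	go func() {
		urlRe := regexp.MustCompile(`^https?://`)
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if line := sc.Text(); urlRe.MatchString(line) {
				urls <- line
				return
			}
		}
	}()

	select {
	case u := <-urls:
		fmt.Println("dashboard URL:", u)
	case <-time.After(5 * time.Minute):
		fmt.Println("output didn't produce a URL") // the condition hit at functional_test.go:933
	}
}
```

In this run the subprocess stayed busy retrying the dashboard proxy (see the 503 retries in the stderr below), so a URL was never printed.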
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-799296 --alsologtostderr -v=1] stderr:
I0908 11:11:35.335104  372718 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:35.335376  372718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:35.335386  372718 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:35.335391  372718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:35.335629  372718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:35.336001  372718 mustload.go:65] Loading cluster: functional-799296
I0908 11:11:35.336393  372718 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:35.336803  372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.336874  372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.354589  372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
I0908 11:11:35.355183  372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.355825  372718 main.go:141] libmachine: Using API Version  1
I0908 11:11:35.355857  372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.356289  372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.356557  372718 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:35.358508  372718 host.go:66] Checking if "functional-799296" exists ...
I0908 11:11:35.358952  372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.359001  372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.376816  372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
I0908 11:11:35.377262  372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.377817  372718 main.go:141] libmachine: Using API Version  1
I0908 11:11:35.377844  372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.378215  372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.378438  372718 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:35.378573  372718 api_server.go:166] Checking apiserver status ...
I0908 11:11:35.378648  372718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 11:11:35.378691  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:35.381649  372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.382059  372718 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:35.382102  372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.382189  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:35.382421  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:35.382586  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:35.382752  372718 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:35.480055  372718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10112/cgroup
W0908 11:11:35.494207  372718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10112/cgroup: Process exited with status 1
stdout:

stderr:
I0908 11:11:35.494277  372718 ssh_runner.go:195] Run: ls
I0908 11:11:35.500715  372718 api_server.go:253] Checking apiserver healthz at https://192.168.39.63:8441/healthz ...
I0908 11:11:35.506006  372718 api_server.go:279] https://192.168.39.63:8441/healthz returned 200:
ok
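Before enabling the addon, the command verifies the apiserver by requesting its /healthz endpoint and expecting HTTP 200 with body "ok", as logged above. A stand-alone sketch of that probe, with the endpoint taken from the log; InsecureSkipVerify is an illustration shortcut, the real client trusts the profile's CA and client certificates instead:

```go
// Sketch only: the apiserver readiness probe seen above — GET /healthz on
// the control-plane endpoint and expect HTTP 200 with body "ok".
// The address comes from the log; skipping TLS verification is purely for
// illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.63:8441/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy cluster prints: 200 ok
}
```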
W0908 11:11:35.506060  372718 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 11:11:35.506253  372718 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:35.506279  372718 addons.go:69] Setting dashboard=true in profile "functional-799296"
I0908 11:11:35.506291  372718 addons.go:238] Setting addon dashboard=true in "functional-799296"
I0908 11:11:35.506319  372718 host.go:66] Checking if "functional-799296" exists ...
I0908 11:11:35.506557  372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.506607  372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.523353  372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
I0908 11:11:35.523944  372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.524448  372718 main.go:141] libmachine: Using API Version  1
I0908 11:11:35.524473  372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.524863  372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.525379  372718 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:35.525437  372718 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:35.541852  372718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
I0908 11:11:35.542272  372718 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:35.542739  372718 main.go:141] libmachine: Using API Version  1
I0908 11:11:35.542768  372718 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:35.543128  372718 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:35.543299  372718 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:35.545001  372718 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:35.547485  372718 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 11:11:35.549144  372718 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 11:11:35.550397  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 11:11:35.550414  372718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 11:11:35.550441  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:35.553644  372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.554051  372718 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:35.554087  372718 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:35.554255  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:35.554485  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:35.554682  372718 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:35.554848  372718 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:35.650834  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 11:11:35.650869  372718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 11:11:35.674352  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 11:11:35.674387  372718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 11:11:35.698687  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 11:11:35.698718  372718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 11:11:35.721254  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 11:11:35.721281  372718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 11:11:35.744089  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 11:11:35.744123  372718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 11:11:35.767365  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 11:11:35.767400  372718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 11:11:35.789824  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 11:11:35.789856  372718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 11:11:35.813285  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 11:11:35.813312  372718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 11:11:35.839297  372718 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 11:11:35.839325  372718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 11:11:35.865635  372718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
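All ten dashboard manifests staged above are applied in a single kubectl invocation with repeated -f flags, as in the command just logged. A sketch of building that invocation, assuming kubectl is on PATH (on the VM the bundled /var/lib/minikube/binaries/v1.34.0/kubectl and the root kubeconfig are used instead):

```go
// Sketch only: applying the staged dashboard manifests in one kubectl call
// with repeated -f flags, mirroring the command logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-configmap.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-role.yaml",
		"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-sa.yaml",
		"/etc/kubernetes/addons/dashboard-secret.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```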
I0908 11:11:36.853488  372718 main.go:141] libmachine: Making call to close driver server
I0908 11:11:36.853529  372718 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:36.854002  372718 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:36.854022  372718 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:36.854031  372718 main.go:141] libmachine: Making call to close driver server
I0908 11:11:36.854038  372718 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:36.854299  372718 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:36.854314  372718 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:36.855917  372718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-799296 addons enable metrics-server

I0908 11:11:36.857358  372718 addons.go:201] Writing out "functional-799296" config to set dashboard=true...
W0908 11:11:36.857599  372718 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 11:11:36.858291  372718 kapi.go:59] client config for functional-799296: &rest.Config{Host:"https://192.168.39.63:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt", KeyFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.key", CAFile:"/home/jenkins/minikube-integration/21512-360138/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25a3920), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 11:11:36.858762  372718 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 11:11:36.858782  372718 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 11:11:36.858786  372718 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 11:11:36.858792  372718 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 11:11:36.858799  372718 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 11:11:36.874268  372718 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  0d1faebe-58e7-48ae-899b-789c325ea834 865 0 2025-09-08 11:11:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 11:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.225.130,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.225.130],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0908 11:11:36.874479  372718 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 11:11:36.874580  372718 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-799296 proxy --port 36195]
I0908 11:11:36.874947  372718 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 11:11:36.923818  372718 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0908 11:11:36.923857  372718 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0908 11:11:36.944172  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0c8c7c6-42d9-407b-9c6a-d9dfd1115650] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000251480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254a00 TLS:<nil>}
I0908 11:11:36.944270  372718 retry.go:31] will retry after 75.817µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.948009  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3e5a096-543d-417a-b85a-e2257a129c03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0006a1dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6000 TLS:<nil>}
I0908 11:11:36.948081  372718 retry.go:31] will retry after 153.92µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.957545  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[57ff71fa-876f-459c-83bc-6bee7625ea28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000944000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254b40 TLS:<nil>}
I0908 11:11:36.957625  372718 retry.go:31] will retry after 316.897µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.963453  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8914a869-1c91-450b-9397-195c82c40a23] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0006a1ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6140 TLS:<nil>}
I0908 11:11:36.963526  372718 retry.go:31] will retry after 455.346µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.973948  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a08e0027-743a-4696-91a4-e0b892d7e73a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0009a4a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254c80 TLS:<nil>}
I0908 11:11:36.974027  372718 retry.go:31] will retry after 751.4µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.984903  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d21a8a6-d59b-42b4-b304-ea78e3a48893] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000944140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254f00 TLS:<nil>}
I0908 11:11:36.984998  372718 retry.go:31] will retry after 899.842µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.990164  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[161c4b8c-9645-4342-8927-39069f5c6449] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc0009a4b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6280 TLS:<nil>}
I0908 11:11:36.990259  372718 retry.go:31] will retry after 848.699µs: Temporary Error: unexpected response code: 503
I0908 11:11:36.995703  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96b32e4e-2a85-4582-9a7e-0aaaf9f718ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:36 GMT]] Body:0xc000b63900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255040 TLS:<nil>}
I0908 11:11:36.995785  372718 retry.go:31] will retry after 2.534358ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.016207  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be9518ff-8847-4f49-a327-f42f08597b91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a4f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e3c0 TLS:<nil>}
I0908 11:11:37.016289  372718 retry.go:31] will retry after 2.301065ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.025565  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b37dd137-3754-4325-981d-f9dc441ec3a0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255180 TLS:<nil>}
I0908 11:11:37.025632  372718 retry.go:31] will retry after 2.477381ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.037666  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[efc20365-8d46-4c2d-aef2-02c965455437] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c63c0 TLS:<nil>}
I0908 11:11:37.037740  372718 retry.go:31] will retry after 6.290045ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.051940  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f39203b2-28b5-43f4-82ab-cf96cc7b12cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a5080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e500 TLS:<nil>}
I0908 11:11:37.052025  372718 retry.go:31] will retry after 5.984699ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.064416  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a34184e-1469-46bf-9474-d261d9712b13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002552c0 TLS:<nil>}
I0908 11:11:37.064498  372718 retry.go:31] will retry after 17.26476ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.087899  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[165b1b7f-7d41-4af5-be6e-748039c8af0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc0009a5200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e640 TLS:<nil>}
I0908 11:11:37.087986  372718 retry.go:31] will retry after 25.109024ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.119040  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2ce101df-ad53-463e-b756-7db653612d56] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255400 TLS:<nil>}
I0908 11:11:37.119154  372718 retry.go:31] will retry after 27.447501ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.154594  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[edb61309-d8b2-48bb-8975-30d331952f8e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6500 TLS:<nil>}
I0908 11:11:37.154708  372718 retry.go:31] will retry after 31.663509ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.191078  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cfc2e302-46fa-4d41-a73f-fe14d180c003] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000944480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e780 TLS:<nil>}
I0908 11:11:37.191177  372718 retry.go:31] will retry after 94.549552ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.297218  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f87e9e7-a5a6-4212-947e-2830adce795b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6640 TLS:<nil>}
I0908 11:11:37.297291  372718 retry.go:31] will retry after 123.169227ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.424537  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d148a078-9298-41e5-95d5-12e26eee1105] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e8c0 TLS:<nil>}
I0908 11:11:37.424606  372718 retry.go:31] will retry after 97.82893ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.526339  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[734de8eb-173c-4985-8219-452253e07819] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ea00 TLS:<nil>}
I0908 11:11:37.526409  372718 retry.go:31] will retry after 132.291895ms: Temporary Error: unexpected response code: 503
I0908 11:11:37.663790  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d10e295-5842-489d-b41d-21b74862ee17] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:37 GMT]] Body:0xc000b63f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166eb40 TLS:<nil>}
I0908 11:11:37.663865  372718 retry.go:31] will retry after 472.21272ms: Temporary Error: unexpected response code: 503
I0908 11:11:38.143539  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ee2c2fe3-4e02-4deb-95cb-81db3919d1da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:38 GMT]] Body:0xc0009a5380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ec80 TLS:<nil>}
I0908 11:11:38.143647  372718 retry.go:31] will retry after 574.282313ms: Temporary Error: unexpected response code: 503
I0908 11:11:38.722050  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1b9db4a8-6de3-4d54-9ec8-8471154363ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:38 GMT]] Body:0xc000944600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255540 TLS:<nil>}
I0908 11:11:38.722129  372718 retry.go:31] will retry after 548.130911ms: Temporary Error: unexpected response code: 503
I0908 11:11:39.274672  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9956e46-a67d-422d-9c14-fe09a65be669] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:39 GMT]] Body:0xc000a3e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6780 TLS:<nil>}
I0908 11:11:39.274776  372718 retry.go:31] will retry after 652.67111ms: Temporary Error: unexpected response code: 503
I0908 11:11:39.932679  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7896bcde-2ed1-41de-80bb-42f54569ad84] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:39 GMT]] Body:0xc0009a54c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166edc0 TLS:<nil>}
I0908 11:11:39.932783  372718 retry.go:31] will retry after 1.108670248s: Temporary Error: unexpected response code: 503
I0908 11:11:41.046567  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4df7fef-1270-473b-9e5d-344c205818a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:41 GMT]] Body:0xc000944780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c68c0 TLS:<nil>}
I0908 11:11:41.046672  372718 retry.go:31] will retry after 2.561254959s: Temporary Error: unexpected response code: 503
I0908 11:11:43.615665  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3eb623a-af6e-4cde-9039-2ddcd1c94975] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:43 GMT]] Body:0xc0009a55c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6a00 TLS:<nil>}
I0908 11:11:43.615748  372718 retry.go:31] will retry after 4.259787307s: Temporary Error: unexpected response code: 503
I0908 11:11:47.879540  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bffc825d-021a-45f2-8936-820a3520e916] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:47 GMT]] Body:0xc0009a5680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000255680 TLS:<nil>}
I0908 11:11:47.879622  372718 retry.go:31] will retry after 5.012788371s: Temporary Error: unexpected response code: 503
I0908 11:11:52.898765  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84f8f7e6-5caf-4cc4-bccd-cda36aa61aff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:52 GMT]] Body:0xc000944880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166ef00 TLS:<nil>}
I0908 11:11:52.898865  372718 retry.go:31] will retry after 4.333629776s: Temporary Error: unexpected response code: 503
I0908 11:11:57.239607  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f5efddca-f342-47d9-8031-4beb7a6bf657] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:11:57 GMT]] Body:0xc0009a5700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f040 TLS:<nil>}
I0908 11:11:57.239704  372718 retry.go:31] will retry after 9.108883573s: Temporary Error: unexpected response code: 503
I0908 11:12:06.352893  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5a0bc37-6337-4b62-bfa0-72ff56b769cc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:06 GMT]] Body:0xc000a3ec00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c6b40 TLS:<nil>}
I0908 11:12:06.352991  372718 retry.go:31] will retry after 17.488649229s: Temporary Error: unexpected response code: 503
I0908 11:12:23.848175  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acd6d977-3961-4a01-a57e-e7cbafce30cf] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:23 GMT]] Body:0xc000a3ec80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002557c0 TLS:<nil>}
I0908 11:12:23.848262  372718 retry.go:31] will retry after 32.352203899s: Temporary Error: unexpected response code: 503
I0908 11:12:56.204118  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da911960-7a47-4c06-8a4b-8bdcfccb20ec] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:12:56 GMT]] Body:0xc000944980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f180 TLS:<nil>}
I0908 11:12:56.204226  372718 retry.go:31] will retry after 48.210112898s: Temporary Error: unexpected response code: 503
I0908 11:13:44.422502  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81d43349-564a-4bf2-8eca-1db2f3e93278] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:13:44 GMT]] Body:0xc0009a4ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e140 TLS:<nil>}
I0908 11:13:44.422603  372718 retry.go:31] will retry after 53.697607322s: Temporary Error: unexpected response code: 503
I0908 11:14:38.124846  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7062205b-2cbc-4957-827f-7e241b51ef53] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:14:38 GMT]] Body:0xc00057ea00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254140 TLS:<nil>}
I0908 11:14:38.124947  372718 retry.go:31] will retry after 36.612596234s: Temporary Error: unexpected response code: 503
I0908 11:15:14.742050  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[424ca690-f465-4982-9f4c-a2b10aed5fff] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:15:14 GMT]] Body:0xc0009a4b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166f2c0 TLS:<nil>}
I0908 11:15:14.742239  372718 retry.go:31] will retry after 31.829625288s: Temporary Error: unexpected response code: 503
I0908 11:15:46.575870  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a6551e3-6f8d-48d8-b2d4-06ff855c7226] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:15:46 GMT]] Body:0xc0009a4ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000254280 TLS:<nil>}
I0908 11:15:46.575962  372718 retry.go:31] will retry after 39.064713825s: Temporary Error: unexpected response code: 503
I0908 11:16:25.645419  372718 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[beffe0ba-3072-418c-9e75-b0a53089038f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 11:16:25 GMT]] Body:0xc000a3e480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00166e000 TLS:<nil>}
I0908 11:16:25.645512  372718 retry.go:31] will retry after 1m17.567564981s: Temporary Error: unexpected response code: 503
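The sequence above is a health poll with exponential backoff: each 503 from the dashboard service behind kubectl proxy triggers a retry with a roughly doubling delay (plus jitter), until the test's timeout expires. A compact sketch of the same loop, with the URL taken from the log and the start interval, cap, and deadline chosen for illustration:

```go
// Sketch only: the retry-with-backoff loop visible above — poll the
// dashboard service through kubectl proxy and back off (roughly doubling,
// capped here at one minute) while it keeps returning 503.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	deadline := time.Now().Add(5 * time.Minute)
	wait := 100 * time.Microsecond

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("dashboard is serving")
				return
			}
			fmt.Printf("unexpected response code: %d, will retry after %v\n", code, wait)
		} else {
			fmt.Printf("request failed: %v, will retry after %v\n", err, wait)
		}
		time.Sleep(wait)
		wait *= 2
		if wait > time.Minute {
			wait = time.Minute
		}
	}
	fmt.Println("dashboard never returned 200 before the deadline")
}
```

Here the service kept answering 503 for the whole window, which most likely means the dashboard pod never became Ready; the post-mortem logs below are where that would show up.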
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-799296 -n functional-799296
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs -n 25: (1.103945163s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp functional-799296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2557956061/001/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ start          │ -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                              │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ start          │ -p functional-799296 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ ssh            │ functional-799296 ssh echo hello                                                                                           │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh cat /etc/hostname                                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ dashboard      │ --url --port 36195 -p functional-799296 --alsologtostderr -v=1                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ service        │ functional-799296 service list                                                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service list -o json                                                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service --namespace=default --https --url hello-node                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url --format={{.IP}}                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format short --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format yaml --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh pgrep buildkitd                                                                                      │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ image          │ functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls                                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format json --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format table --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:11:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:11:34.770029  372629 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:11:34.770326  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770337  372629 out.go:374] Setting ErrFile to fd 2...
	I0908 11:11:34.770343  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770556  372629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:11:34.771311  372629 out.go:368] Setting JSON to false
	I0908 11:11:34.772769  372629 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3240,"bootTime":1757326655,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:11:34.772896  372629 start.go:140] virtualization: kvm guest
	I0908 11:11:34.775116  372629 out.go:179] * [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:11:34.777118  372629 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:11:34.777144  372629 notify.go:220] Checking for updates...
	I0908 11:11:34.780199  372629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:11:34.781673  372629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:11:34.783184  372629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:11:34.784823  372629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:11:34.786318  372629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:11:34.788394  372629 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:11:34.788892  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.788991  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.806214  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0908 11:11:34.806900  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.807525  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.807543  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.808041  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.808244  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.808526  372629 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:11:34.808858  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.808913  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.825386  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0908 11:11:34.825932  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.826409  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.826443  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.826886  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.827109  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.863939  372629 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:11:34.865329  372629 start.go:304] selected driver: kvm2
	I0908 11:11:34.865348  372629 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.865486  372629 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:11:34.866509  372629 cni.go:84] Creating CNI manager for ""
	I0908 11:11:34.866565  372629 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 11:11:34.866620  372629 start.go:348] cluster config:
	{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.868261  372629 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 08 11:11:52 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:11:52Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Sep 08 11:11:52 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:11:52Z" level=error msg="error collecting stats for container 'kube-apiserver': Error response from daemon: No such container: df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559"
	Sep 08 11:11:53 functional-799296 dockerd[7653]: time="2025-09-08T11:11:53.503203138Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:11:53 functional-799296 dockerd[7653]: time="2025-09-08T11:11:53.904296453Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:02 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:12:02Z" level=error msg="error getting RW layer size for container ID 'df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559': Error response from daemon: No such container: df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559"
	Sep 08 11:12:02 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:12:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'df39d7e3c156915d45768a83a7090185a60ac983d70afb870654c18a67cd5559'"
	Sep 08 11:12:09 functional-799296 dockerd[7653]: time="2025-09-08T11:12:09.184879445Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:14 functional-799296 dockerd[7653]: time="2025-09-08T11:12:14.163113354Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.505099269Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.907556845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.516199292Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.920186221Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:56 functional-799296 dockerd[7653]: time="2025-09-08T11:12:56.168763802Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:57 functional-799296 dockerd[7653]: time="2025-09-08T11:12:57.131125362Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.547498819Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.953208912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.500732587Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.924322001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 dockerd[7653]: time="2025-09-08T11:14:22.405899966Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:14:22Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Sep 08 11:14:24 functional-799296 dockerd[7653]: time="2025-09-08T11:14:24.129416999Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.505816861Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.902715633Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.505499622Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.903854978Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
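	
	The repeated "toomanyrequests" errors in the Docker log above show Docker Hub's unauthenticated pull rate limit blocking the kubernetesui/dashboard and kubernetesui/metrics-scraper image pulls on this node, which is why the dashboard pods never become ready. A minimal mitigation sketch, not part of this test run and assuming an authenticated Docker Hub account is available on the host, would be to pull the images on the host and load them into the profile so the kubelet does not have to pull from Docker Hub at all:
	
	  # log in so host-side pulls count against an account quota (hypothetical account)
	  docker login
	  # pull on the host, then load the image into the functional-799296 node
	  docker pull kubernetesui/metrics-scraper:v1.0.8
	  minikube -p functional-799296 image load kubernetesui/metrics-scraper:v1.0.8
	
	The dashboard image itself is referenced only by digest in the log above, so the same two steps would apply to whichever dashboard tag the addon resolves to.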
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85a061fe28404       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   5477cbca6c431       hello-node-75c85bcc94-hpxnn
	c0b1dc9ecbc3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   23b66ffbe7bcc       busybox-mount
	991dc27df9488       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   bd82e1912f29b       hello-node-connect-7d85dfc575-z44vz
	46e16f741b0a8       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   3                   6bc5cdc5ab0c5       coredns-66bc5c9577-jgsmm
	fe4a8982187eb       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       3                   06d5d5b14338d       storage-provisioner
	2b2df4438da81       df0860106674d                                                                                         5 minutes ago       Running             kube-proxy                3                   8677445b2febe       kube-proxy-4vghz
	0d893b24e3bfe       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   3                   e442b985ee5af       kube-controller-manager-functional-799296
	fc8fecb4cc17d       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   915b70b583616       kube-apiserver-functional-799296
	8c478ed91b786       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      3                   1dadbf1bf582d       etcd-functional-799296
	3e020aa535204       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            4                   fa14ee4a8ddc3       kube-scheduler-functional-799296
	98555532e7d99       46169d968e920                                                                                         5 minutes ago       Exited              kube-scheduler            3                   554dbe054cfea       kube-scheduler-functional-799296
	48239ff88be42       a0af72f2ec6d6                                                                                         5 minutes ago       Exited              kube-controller-manager   2                   b84ea368d516e       kube-controller-manager-functional-799296
	a3aded15ab5cd       df0860106674d                                                                                         5 minutes ago       Exited              kube-proxy                2                   bdb66340f0ffa       kube-proxy-4vghz
	8da95a34aad1d       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       2                   548e32b88c739       storage-provisioner
	7ebc0a8be557e       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   2                   630bd78209478       coredns-66bc5c9577-jgsmm
	28f5c5f342b5a       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      2                   85dc20bb58775       etcd-functional-799296
	
	
	==> coredns [46e16f741b0a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47197 - 38538 "HINFO IN 2640664162402965986.7445642495243262507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.344615448s
	
	
	==> coredns [7ebc0a8be557] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40026 - 49666 "HINFO IN 6934977106845304138.5256607027527752237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102570572s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-799296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-799296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-799296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_08_34_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:08:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-799296
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    functional-799296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b5d811bb77448bf80c1bfb1571c2de4
	  System UUID:                9b5d811b-b774-48bf-80c1-bfb1571c2de4
	  Boot ID:                    07459d39-5da9-4917-ab00-ad155ef2fd22
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hpxnn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  default                     hello-node-connect-7d85dfc575-z44vz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     mysql-5bb876957f-bm5sk                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m4s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 coredns-66bc5c9577-jgsmm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m58s
	  kube-system                 etcd-functional-799296                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m5s
	  kube-system                 kube-apiserver-functional-799296              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-functional-799296     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-proxy-4vghz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-functional-799296              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vt6p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-656tt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m55s                  kube-proxy       
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  Starting                 6m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m3s                   kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m3s                   kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m3s                   kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m59s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  NodeReady                7m58s                  kubelet          Node functional-799296 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    6m46s (x8 over 6m46s)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m46s (x8 over 6m46s)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m46s (x7 over 6m46s)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m39s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m37s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
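	
	For reference, the Allocated resources totals in the node description above are simply the sums of the per-pod figures in the Non-terminated Pods table: CPU requests of 600m (mysql) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd and kube-scheduler give 1350m, roughly 67% of the node's 2 CPUs, and memory requests of 512Mi (mysql) + 100Mi (etcd) + 70Mi (coredns) give 682Mi. The limits come only from mysql (700m CPU) and from mysql plus coredns (700Mi + 170Mi = 870Mi of memory).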
	
	
	==> dmesg <==
	[  +0.108325] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113567] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099329] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.140732] kauditd_printk_skb: 166 callbacks suppressed
	[  +0.069461] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.835030] kauditd_printk_skb: 273 callbacks suppressed
	[Sep 8 11:09] kauditd_printk_skb: 16 callbacks suppressed
	[ +15.176955] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.502717] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.001483] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.201727] kauditd_printk_skb: 353 callbacks suppressed
	[  +4.818024] kauditd_printk_skb: 169 callbacks suppressed
	[Sep 8 11:10] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.173869] kauditd_printk_skb: 5 callbacks suppressed
	[ +15.172551] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.550566] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.112394] kauditd_printk_skb: 420 callbacks suppressed
	[  +5.476814] kauditd_printk_skb: 102 callbacks suppressed
	[Sep 8 11:11] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.630517] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.573282] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.518957] kauditd_printk_skb: 114 callbacks suppressed
	[  +4.393821] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.000017] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.595612] crun[13335]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [28f5c5f342b5] <==
	{"level":"warn","ts":"2025-09-08T11:09:53.392632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.402955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.411003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.421141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.431370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.440393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.513378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:10:35.620501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:10:35.620714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	{"level":"error","ts":"2025-09-08T11:10:35.620837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.623746Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"365d90f3070fcb7b","current-leader-member-id":"365d90f3070fcb7b"}
	{"level":"info","ts":"2025-09-08T11:10:42.623848Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:10:42.623860Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627701Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627774Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630783Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"error","ts":"2025-09-08T11:10:42.630863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630921Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2025-09-08T11:10:42.630931Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	
	
	==> etcd [8c478ed91b78] <==
	{"level":"warn","ts":"2025-09-08T11:10:55.371641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.392525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.433382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.434607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.447806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.461112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.481382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.489932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.503570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.511719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.535779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.545504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.557051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.569520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.581750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.601578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.607815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.629943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.641538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.653876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.681534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.687758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.705513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.715377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.816477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44268","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:16:36 up 8 min,  0 users,  load average: 0.06, 0.42, 0.31
	Linux functional-799296 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fc8fecb4cc17] <==
	I0908 11:10:56.607698       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0908 11:10:57.362245       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 11:10:57.391601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 11:10:58.722374       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 11:10:58.814539       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 11:10:58.860001       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 11:10:58.869048       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 11:10:59.950878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 11:11:00.254169       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 11:11:00.306747       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 11:11:15.543162       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.203.138"}
	I0908 11:11:20.324262       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.206.43"}
	I0908 11:11:28.065971       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.26.100"}
	I0908 11:11:32.038141       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.137.235"}
	I0908 11:11:36.337658       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:11:36.788168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.225.130"}
	I0908 11:11:36.838104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.71.160"}
	I0908 11:12:09.225521       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:12:22.079669       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:29.636876       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:31.016508       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:44.869754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:55.565168       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.937898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.940238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
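	
	The apiserver log above confirms that ClusterIPs were allocated for the kubernetes-dashboard and dashboard-metrics-scraper services at 11:11:36, so the dashboard objects were created even though the image pulls keep failing. Assuming kubectl on the host is pointed at this cluster, one quick way to inspect their state would be:
	
	  kubectl -n kubernetes-dashboard get svc,deployments,pods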
	
	
	==> kube-controller-manager [0d893b24e3bf] <==
	I0908 11:10:59.964533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:10:59.964287       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:10:59.964643       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:10:59.967161       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:10:59.971748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 11:10:59.972999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:10:59.975477       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:10:59.975425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:10:59.976862       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:10:59.977093       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:10:59.977210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:10:59.984290       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:10:59.990150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:10:59.992361       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:10:59.994798       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 11:11:00.001637       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:11:00.019346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0908 11:11:36.476977       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.497516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535118       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535662       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.549933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.550659       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567343       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567493       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [48239ff88be4] <==
	I0908 11:10:48.950763       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:10:49.522609       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0908 11:10:49.522646       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:49.533303       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0908 11:10:49.533348       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0908 11:10:49.533296       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0908 11:10:49.533699       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [2b2df4438da8] <==
	I0908 11:10:58.562390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:10:58.664361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:10:58.664481       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.63"]
	E0908 11:10:58.664592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:10:58.781256       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:10:58.781560       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:10:58.781685       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:10:58.797904       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:10:58.799903       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:10:58.800229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:58.807635       1 config.go:200] "Starting service config controller"
	I0908 11:10:58.807888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:10:58.807928       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:10:58.808008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:10:58.808168       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:10:58.808230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:10:58.812083       1 config.go:309] "Starting node config controller"
	I0908 11:10:58.812114       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:10:58.908081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:10:58.908290       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:10:58.908318       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:10:58.913283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a3aded15ab5c] <==
	I0908 11:10:48.659680       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:10:48.775257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:10:48.781034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:49.776207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3e020aa53520] <==
	I0908 11:10:56.486742       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:10:56.486786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:56.490541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.490612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.491671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:10:56.491994       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0908 11:10:56.504869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:10:56.505009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:10:56.508275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 11:10:56.508756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:10:56.509216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:10:56.509711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 11:10:56.509806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:10:56.510369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:10:56.510724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:10:56.511724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:10:56.511990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:10:56.512248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:56.511239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 11:10:56.512796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 11:10:56.513731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 11:10:56.513992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:10:56.514034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:10:56.514370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0908 11:10:56.590779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [98555532e7d9] <==
	I0908 11:10:49.688822       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 08 11:15:10 functional-799296 kubelet[9815]: E0908 11:15:10.301561    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:15:17 functional-799296 kubelet[9815]: E0908 11:15:17.302208    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:15:18 functional-799296 kubelet[9815]: E0908 11:15:18.298590    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:15:19 functional-799296 kubelet[9815]: E0908 11:15:19.300211    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:15:24 functional-799296 kubelet[9815]: E0908 11:15:24.302611    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:15:31 functional-799296 kubelet[9815]: E0908 11:15:31.302405    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:15:32 functional-799296 kubelet[9815]: E0908 11:15:32.301019    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:15:33 functional-799296 kubelet[9815]: E0908 11:15:33.298205    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:15:35 functional-799296 kubelet[9815]: E0908 11:15:35.299841    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:15:45 functional-799296 kubelet[9815]: E0908 11:15:45.301794    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:15:47 functional-799296 kubelet[9815]: E0908 11:15:47.300342    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:15:48 functional-799296 kubelet[9815]: E0908 11:15:48.298346    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:15:50 functional-799296 kubelet[9815]: E0908 11:15:50.301768    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:15:59 functional-799296 kubelet[9815]: E0908 11:15:59.300012    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:16:00 functional-799296 kubelet[9815]: E0908 11:16:00.299936    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:16:01 functional-799296 kubelet[9815]: E0908 11:16:01.298372    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:16:02 functional-799296 kubelet[9815]: E0908 11:16:02.300820    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:16:12 functional-799296 kubelet[9815]: E0908 11:16:12.300521    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:16:13 functional-799296 kubelet[9815]: E0908 11:16:13.298258    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:16:13 functional-799296 kubelet[9815]: E0908 11:16:13.303429    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:16:14 functional-799296 kubelet[9815]: E0908 11:16:14.301564    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:16:24 functional-799296 kubelet[9815]: E0908 11:16:24.305484    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:16:25 functional-799296 kubelet[9815]: E0908 11:16:25.298405    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:16:26 functional-799296 kubelet[9815]: E0908 11:16:26.301324    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:16:26 functional-799296 kubelet[9815]: E0908 11:16:26.301706    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	
	
	==> storage-provisioner [8da95a34aad1] <==
	I0908 11:10:08.878220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:10:08.888633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:10:08.888730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:10:08.892145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:12.349561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:16.610079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:20.209144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:23.264748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.287715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.293629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.293852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 11:10:26.294105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	I0908 11:10:26.295179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c9d7864-016f-4339-b126-11f104bc2c6b", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181 became leader
	W0908 11:10:26.304031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.307788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.395277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	W0908 11:10:28.311332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:28.320750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.325001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.331391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.334794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.341394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.345571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.356004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe4a8982187e] <==
	W0908 11:16:11.886326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:13.890618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:13.896350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:15.900132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:15.907665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:17.911591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:17.922583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:19.927266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:19.934042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:21.938409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:21.950215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:23.953921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:23.959957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:25.963836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:25.969755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:27.974500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:27.982373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:29.986710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:29.997544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:32.001819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:32.008527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:34.012045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:34.017976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:36.022675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:16:36.032683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-799296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1 (103.489184ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://c0b1dc9ecbc3d940b52991b15e1830ada53642504383ccc7d74239f147ce04e9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:11:24 +0000
	      Finished:     Mon, 08 Sep 2025 11:11:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkd9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vkd9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m16s  default-scheduler  Successfully assigned default/busybox-mount to functional-799296
	  Normal  Pulling    5m16s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m13s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.126s (2.702s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m13s  kubelet            Created container: mount-munger
	  Normal  Started    5m13s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-bm5sk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:32 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qc5g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qc5g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m5s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-bm5sk to functional-799296
	  Warning  Failed     3m41s (x4 over 5m4s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m16s (x5 over 5m5s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m15s (x5 over 5m4s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m15s                 kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x15 over 5m4s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x20 over 5m4s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:25 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr22d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cr22d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m12s                  default-scheduler  Successfully assigned default/sp-pod to functional-799296
	  Normal   Pulling    2m14s (x5 over 5m11s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m13s (x5 over 5m10s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m13s (x5 over 5m10s)  kubelet            Error: ErrImagePull
	  Warning  Failed     79s (x15 over 5m10s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    12s (x20 over 5m10s)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vt6p2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-656tt" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.98s)
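
The dashboard, metrics-scraper, mysql and sp-pod failures above share one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) on docker.io/kubernetesui/dashboard, docker.io/kubernetesui/metrics-scraper, docker.io/mysql:5.7 and docker.io/nginx, so the dashboard never came up within the test window. A minimal sketch for confirming and working around the limit when reproducing this failure by hand against the same profile; the secret name regcred and the Docker Hub credentials are placeholders, not anything this CI job uses, and the patch below only covers pods using the default service account in the default namespace (mysql, sp-pod):

	# Reproduce the pull failure inside the minikube VM
	minikube -p functional-799296 ssh -- docker pull docker.io/mysql:5.7

	# Authenticate pulls for the default service account so kubelet's back-off retries can succeed
	kubectl --context functional-799296 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<dockerhub-access-token>
	kubectl --context functional-799296 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'
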

x
+
TestFunctional/parallel/PersistentVolumeClaim (368.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4e06c9af-dc1d-440e-b556-12898ec7ef89] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005557945s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-799296 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-799296 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-799296 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-799296 apply -f testdata/storage-provisioner/pod.yaml
I0908 11:11:25.980197  364318 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e057834f-8639-426b-b18b-b92cc9b17156] Pending
helpers_test.go:352: "sp-pod" [e057834f-8639-426b-b18b-b92cc9b17156] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-08 11:17:26.263964916 +0000 UTC m=+1035.369513717
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-799296 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-799296 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-799296/192.168.39.63
Start Time:       Mon, 08 Sep 2025 11:11:25 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr22d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-cr22d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-799296
  Normal   Pulling    3m3s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     3m2s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m2s (x5 over 5m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    46s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     46s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-799296 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-799296 logs sp-pod -n default: exit status 1 (77.959588ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-799296 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
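
The PVC path itself appears healthy in this run (myclaim was created and mounted into sp-pod at /tmp/mount, and PodReadyToStartContainers is True); the 6m0s wait expired only because the docker.io/nginx pull kept hitting the same rate limit. A minimal sketch of replaying the flow the test drives, using the testdata manifests and label referenced above; this is a by-hand reproduction, not the harness itself:

	kubectl --context functional-799296 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-799296 get pvc myclaim -o jsonpath='{.status.phase}'
	kubectl --context functional-799296 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-799296 -n default wait pod -l test=storage-provisioner \
	  --for=condition=Ready --timeout=6m0s
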
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-799296 -n functional-799296
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs -n 25: (1.033074384s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp functional-799296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2557956061/001/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ start          │ -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                              │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ start          │ -p functional-799296 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ ssh            │ functional-799296 ssh echo hello                                                                                           │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh cat /etc/hostname                                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ dashboard      │ --url --port 36195 -p functional-799296 --alsologtostderr -v=1                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ service        │ functional-799296 service list                                                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service list -o json                                                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service --namespace=default --https --url hello-node                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url --format={{.IP}}                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format short --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format yaml --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh pgrep buildkitd                                                                                      │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ image          │ functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls                                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format json --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format table --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:11:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:11:34.770029  372629 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:11:34.770326  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770337  372629 out.go:374] Setting ErrFile to fd 2...
	I0908 11:11:34.770343  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770556  372629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:11:34.771311  372629 out.go:368] Setting JSON to false
	I0908 11:11:34.772769  372629 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3240,"bootTime":1757326655,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:11:34.772896  372629 start.go:140] virtualization: kvm guest
	I0908 11:11:34.775116  372629 out.go:179] * [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:11:34.777118  372629 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:11:34.777144  372629 notify.go:220] Checking for updates...
	I0908 11:11:34.780199  372629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:11:34.781673  372629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:11:34.783184  372629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:11:34.784823  372629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:11:34.786318  372629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:11:34.788394  372629 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:11:34.788892  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.788991  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.806214  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0908 11:11:34.806900  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.807525  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.807543  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.808041  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.808244  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.808526  372629 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:11:34.808858  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.808913  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.825386  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0908 11:11:34.825932  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.826409  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.826443  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.826886  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.827109  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.863939  372629 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:11:34.865329  372629 start.go:304] selected driver: kvm2
	I0908 11:11:34.865348  372629 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.865486  372629 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:11:34.866509  372629 cni.go:84] Creating CNI manager for ""
	I0908 11:11:34.866565  372629 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 11:11:34.866620  372629 start.go:348] cluster config:
	{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.868261  372629 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 08 11:12:14 functional-799296 dockerd[7653]: time="2025-09-08T11:12:14.163113354Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.505099269Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.907556845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.516199292Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.920186221Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:56 functional-799296 dockerd[7653]: time="2025-09-08T11:12:56.168763802Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:57 functional-799296 dockerd[7653]: time="2025-09-08T11:12:57.131125362Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.547498819Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.953208912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.500732587Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.924322001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 dockerd[7653]: time="2025-09-08T11:14:22.405899966Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:14:22Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Sep 08 11:14:24 functional-799296 dockerd[7653]: time="2025-09-08T11:14:24.129416999Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.505816861Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.902715633Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.505499622Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.903854978Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:06 functional-799296 dockerd[7653]: time="2025-09-08T11:17:06.400524250Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:06 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:17:06Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 11:17:07 functional-799296 dockerd[7653]: time="2025-09-08T11:17:07.252102081Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:18 functional-799296 dockerd[7653]: time="2025-09-08T11:17:18.500534554Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:17:18 functional-799296 dockerd[7653]: time="2025-09-08T11:17:18.902748891Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:20 functional-799296 dockerd[7653]: time="2025-09-08T11:17:20.502544662Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:17:20 functional-799296 dockerd[7653]: time="2025-09-08T11:17:20.903666964Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
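	
	Every failed pull above has the same root cause: dockerd inside the functional-799296 VM hit Docker Hub's unauthenticated pull rate limit, so the kubernetesui/dashboard and kubernetesui/metrics-scraper images (TestFunctional/parallel/DashboardCmd), mysql:5.7 (TestFunctional/parallel/MySQL) and nginx (most likely the PersistentVolumeClaim test's sp-pod) never reached the node. A minimal follow-up sketch for confirming this against the same cluster; the pod names are copied from the node description below, and the expected ImagePullBackOff state is an assumption, not test output:
	
	  # dashboard pods are expected to sit in ErrImagePull/ImagePullBackOff rather than Running
	  kubectl -n kubernetes-dashboard get pods
	  kubectl -n kubernetes-dashboard describe pod kubernetes-dashboard-855c9754f9-656tt
	  # same pattern for the other failing tests' workloads
	  kubectl -n default describe pod mysql-5bb876957f-bm5sk
	  kubectl -n default describe pod sp-pod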
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85a061fe28404       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   5477cbca6c431       hello-node-75c85bcc94-hpxnn
	c0b1dc9ecbc3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              mount-munger              0                   23b66ffbe7bcc       busybox-mount
	991dc27df9488       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   bd82e1912f29b       hello-node-connect-7d85dfc575-z44vz
	46e16f741b0a8       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   3                   6bc5cdc5ab0c5       coredns-66bc5c9577-jgsmm
	fe4a8982187eb       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       3                   06d5d5b14338d       storage-provisioner
	2b2df4438da81       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                3                   8677445b2febe       kube-proxy-4vghz
	0d893b24e3bfe       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   3                   e442b985ee5af       kube-controller-manager-functional-799296
	fc8fecb4cc17d       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   915b70b583616       kube-apiserver-functional-799296
	8c478ed91b786       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      3                   1dadbf1bf582d       etcd-functional-799296
	3e020aa535204       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            4                   fa14ee4a8ddc3       kube-scheduler-functional-799296
	98555532e7d99       46169d968e920                                                                                         6 minutes ago       Exited              kube-scheduler            3                   554dbe054cfea       kube-scheduler-functional-799296
	48239ff88be42       a0af72f2ec6d6                                                                                         6 minutes ago       Exited              kube-controller-manager   2                   b84ea368d516e       kube-controller-manager-functional-799296
	a3aded15ab5cd       df0860106674d                                                                                         6 minutes ago       Exited              kube-proxy                2                   bdb66340f0ffa       kube-proxy-4vghz
	8da95a34aad1d       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       2                   548e32b88c739       storage-provisioner
	7ebc0a8be557e       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   2                   630bd78209478       coredns-66bc5c9577-jgsmm
	28f5c5f342b5a       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      2                   85dc20bb58775       etcd-functional-799296
	
	
	==> coredns [46e16f741b0a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47197 - 38538 "HINFO IN 2640664162402965986.7445642495243262507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.344615448s
	
	
	==> coredns [7ebc0a8be557] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40026 - 49666 "HINFO IN 6934977106845304138.5256607027527752237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102570572s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-799296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-799296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-799296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_08_34_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:08:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-799296
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:14:40 +0000   Mon, 08 Sep 2025 11:08:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    functional-799296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b5d811bb77448bf80c1bfb1571c2de4
	  System UUID:                9b5d811b-b774-48bf-80c1-bfb1571c2de4
	  Boot ID:                    07459d39-5da9-4917-ab00-ad155ef2fd22
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hpxnn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  default                     hello-node-connect-7d85dfc575-z44vz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-bm5sk                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m55s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-jgsmm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m49s
	  kube-system                 etcd-functional-799296                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m56s
	  kube-system                 kube-apiserver-functional-799296              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-functional-799296     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 kube-proxy-4vghz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-scheduler-functional-799296              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vt6p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-656tt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m46s                  kube-proxy       
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m54s                  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m54s                  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s                  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m54s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m50s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  NodeReady                8m49s                  kubelet          Node functional-799296 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    7m37s (x8 over 7m37s)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m37s (x8 over 7m37s)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m37s (x7 over 7m37s)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m30s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m28s                  node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
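	
	For reference, the 1350m (67%) CPU request figure above is simple arithmetic over the pod table: 600m (mysql) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd and kube-scheduler = 1350m of the node's 2000m allocatable, and the 682Mi memory figure sums the same way (512Mi + 70Mi + 100Mi). The dashboard, mysql and sp-pod workloads are all scheduled onto the node, so what keeps the failing tests' pods from running is the image pulls shown in the Docker log, not resource pressure.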
	
	
	==> dmesg <==
	[  +0.108325] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113567] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099329] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.140732] kauditd_printk_skb: 166 callbacks suppressed
	[  +0.069461] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.835030] kauditd_printk_skb: 273 callbacks suppressed
	[Sep 8 11:09] kauditd_printk_skb: 16 callbacks suppressed
	[ +15.176955] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.502717] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.001483] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.201727] kauditd_printk_skb: 353 callbacks suppressed
	[  +4.818024] kauditd_printk_skb: 169 callbacks suppressed
	[Sep 8 11:10] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.173869] kauditd_printk_skb: 5 callbacks suppressed
	[ +15.172551] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.550566] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.112394] kauditd_printk_skb: 420 callbacks suppressed
	[  +5.476814] kauditd_printk_skb: 102 callbacks suppressed
	[Sep 8 11:11] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.630517] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.573282] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.518957] kauditd_printk_skb: 114 callbacks suppressed
	[  +4.393821] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.000017] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.595612] crun[13335]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [28f5c5f342b5] <==
	{"level":"warn","ts":"2025-09-08T11:09:53.392632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.402955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.411003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.421141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.431370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.440393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.513378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:10:35.620501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:10:35.620714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	{"level":"error","ts":"2025-09-08T11:10:35.620837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.623746Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"365d90f3070fcb7b","current-leader-member-id":"365d90f3070fcb7b"}
	{"level":"info","ts":"2025-09-08T11:10:42.623848Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:10:42.623860Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627701Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627774Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630783Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"error","ts":"2025-09-08T11:10:42.630863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630921Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2025-09-08T11:10:42.630931Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	
	
	==> etcd [8c478ed91b78] <==
	{"level":"warn","ts":"2025-09-08T11:10:55.371641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.392525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.433382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.434607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.447806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.461112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.481382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.489932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.503570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.511719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.535779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.545504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.557051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.569520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.581750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.601578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.607815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.629943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.641538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.653876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.681534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.687758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.705513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.715377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.816477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44268","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:17:27 up 9 min,  0 users,  load average: 0.06, 0.37, 0.29
	Linux functional-799296 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fc8fecb4cc17] <==
	I0908 11:10:57.362245       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0908 11:10:57.391601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0908 11:10:58.722374       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0908 11:10:58.814539       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0908 11:10:58.860001       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0908 11:10:58.869048       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0908 11:10:59.950878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0908 11:11:00.254169       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0908 11:11:00.306747       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 11:11:15.543162       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.203.138"}
	I0908 11:11:20.324262       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.206.43"}
	I0908 11:11:28.065971       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.26.100"}
	I0908 11:11:32.038141       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.137.235"}
	I0908 11:11:36.337658       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:11:36.788168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.225.130"}
	I0908 11:11:36.838104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.71.160"}
	I0908 11:12:09.225521       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:12:22.079669       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:29.636876       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:31.016508       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:44.869754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:55.565168       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.937898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.940238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:17:20.644574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0d893b24e3bf] <==
	I0908 11:10:59.964533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:10:59.964287       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:10:59.964643       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:10:59.967161       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:10:59.971748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 11:10:59.972999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:10:59.975477       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:10:59.975425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:10:59.976862       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:10:59.977093       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:10:59.977210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:10:59.984290       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:10:59.990150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:10:59.992361       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:10:59.994798       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 11:11:00.001637       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:11:00.019346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0908 11:11:36.476977       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.497516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535118       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535662       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.549933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.550659       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567343       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567493       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [48239ff88be4] <==
	I0908 11:10:48.950763       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:10:49.522609       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0908 11:10:49.522646       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:49.533303       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0908 11:10:49.533348       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0908 11:10:49.533296       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0908 11:10:49.533699       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [2b2df4438da8] <==
	I0908 11:10:58.562390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:10:58.664361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:10:58.664481       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.63"]
	E0908 11:10:58.664592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:10:58.781256       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:10:58.781560       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:10:58.781685       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:10:58.797904       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:10:58.799903       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:10:58.800229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:58.807635       1 config.go:200] "Starting service config controller"
	I0908 11:10:58.807888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:10:58.807928       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:10:58.808008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:10:58.808168       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:10:58.808230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:10:58.812083       1 config.go:309] "Starting node config controller"
	I0908 11:10:58.812114       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:10:58.908081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:10:58.908290       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:10:58.908318       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:10:58.913283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a3aded15ab5c] <==
	I0908 11:10:48.659680       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:10:48.775257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:10:48.781034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:49.776207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3e020aa53520] <==
	I0908 11:10:56.486742       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:10:56.486786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:56.490541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.490612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.491671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:10:56.491994       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0908 11:10:56.504869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:10:56.505009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:10:56.508275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 11:10:56.508756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:10:56.509216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:10:56.509711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 11:10:56.509806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:10:56.510369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:10:56.510724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:10:56.511724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:10:56.511990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:10:56.512248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:56.511239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 11:10:56.512796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 11:10:56.513731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 11:10:56.513992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:10:56.514034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:10:56.514370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0908 11:10:56.590779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [98555532e7d9] <==
	I0908 11:10:49.688822       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 08 11:16:41 functional-799296 kubelet[9815]: E0908 11:16:41.300077    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:16:50 functional-799296 kubelet[9815]: E0908 11:16:50.307600    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:16:51 functional-799296 kubelet[9815]: E0908 11:16:51.298023    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:16:52 functional-799296 kubelet[9815]: E0908 11:16:52.307276    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:16:53 functional-799296 kubelet[9815]: E0908 11:16:53.299871    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:17:03 functional-799296 kubelet[9815]: E0908 11:17:03.300821    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:17:05 functional-799296 kubelet[9815]: E0908 11:17:05.300699    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:17:06 functional-799296 kubelet[9815]: E0908 11:17:06.408606    9815 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:17:06 functional-799296 kubelet[9815]: E0908 11:17:06.408663    9815 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 08 11:17:06 functional-799296 kubelet[9815]: E0908 11:17:06.408919    9815 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(e057834f-8639-426b-b18b-b92cc9b17156): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:17:06 functional-799296 kubelet[9815]: E0908 11:17:06.408958    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:17:07 functional-799296 kubelet[9815]: E0908 11:17:07.259073    9815 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 08 11:17:07 functional-799296 kubelet[9815]: E0908 11:17:07.259134    9815 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 08 11:17:07 functional-799296 kubelet[9815]: E0908 11:17:07.259209    9815 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-bm5sk_default(94489291-66ea-41b5-9147-83370907abcc): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:17:07 functional-799296 kubelet[9815]: E0908 11:17:07.259239    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:17:18 functional-799296 kubelet[9815]: E0908 11:17:18.910272    9815 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:17:18 functional-799296 kubelet[9815]: E0908 11:17:18.910338    9815 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:17:18 functional-799296 kubelet[9815]: E0908 11:17:18.910487    9815 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-656tt_kubernetes-dashboard(899616a7-67f1-4a7d-b570-23af938cef3f): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:17:18 functional-799296 kubelet[9815]: E0908 11:17:18.910520    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:17:19 functional-799296 kubelet[9815]: E0908 11:17:19.297762    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:17:20 functional-799296 kubelet[9815]: E0908 11:17:20.910720    9815 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:17:20 functional-799296 kubelet[9815]: E0908 11:17:20.910768    9815 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:17:20 functional-799296 kubelet[9815]: E0908 11:17:20.910872    9815 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2_kubernetes-dashboard(a273d9a7-fb0c-4b8d-983d-5281c9c3e63d): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 11:17:20 functional-799296 kubelet[9815]: E0908 11:17:20.910911    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:17:21 functional-799296 kubelet[9815]: E0908 11:17:21.300644    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	
	
	==> storage-provisioner [8da95a34aad1] <==
	I0908 11:10:08.878220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:10:08.888633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:10:08.888730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:10:08.892145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:12.349561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:16.610079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:20.209144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:23.264748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.287715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.293629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.293852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 11:10:26.294105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	I0908 11:10:26.295179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c9d7864-016f-4339-b126-11f104bc2c6b", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181 became leader
	W0908 11:10:26.304031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.307788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.395277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	W0908 11:10:28.311332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:28.320750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.325001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.331391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.334794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.341394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.345571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.356004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe4a8982187e] <==
	W0908 11:17:02.195770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:04.199738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:04.206364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:06.209972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:06.219339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:08.222944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:08.228620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:10.232634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:10.239222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:12.242517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:12.252353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:14.256724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:14.263986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:16.267981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:16.273859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:18.276839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:18.283886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:20.288029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:20.298769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:22.306176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:22.312622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:24.316109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:24.322254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:26.325865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:17:26.332256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-799296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1 (96.904899ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://c0b1dc9ecbc3d940b52991b15e1830ada53642504383ccc7d74239f147ce04e9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:11:24 +0000
	      Finished:     Mon, 08 Sep 2025 11:11:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkd9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vkd9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m7s  default-scheduler  Successfully assigned default/busybox-mount to functional-799296
	  Normal  Pulling    6m7s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m4s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.126s (2.702s including waiting). Image size: 4403845 bytes.
	  Normal  Created    6m4s  kubelet            Created container: mount-munger
	  Normal  Started    6m4s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-bm5sk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:32 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qc5g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qc5g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m56s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-bm5sk to functional-799296
	  Warning  Failed     4m32s (x4 over 5m55s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m7s (x5 over 5m56s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m6s (x5 over 5m55s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m6s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    48s (x21 over 5m55s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     48s (x21 over 5m55s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:25 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr22d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cr22d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-799296
	  Normal   Pulling    3m5s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m4s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m4s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x21 over 6m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     48s (x21 over 6m1s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vt6p2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-656tt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1
E0908 11:19:04.602886  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.63s)
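
The PersistentVolumeClaim failure above is an image-pull problem rather than a provisioning problem: the storage-provisioner log shows the hostpath provisioner acquiring its lease and starting, and sp-pod was scheduled with its volumes mounted (PodReadyToStartContainers is True), but the docker.io/nginx image never arrives because every pull hits the Docker Hub unauthenticated rate limit (toomanyrequests). The manifests the test applies are not reproduced in this report; the following is only a minimal sketch of the deployed objects, reconstructed from the kubectl describe output above. The claim name, pod name, container name, image, mount path and label come from the report; the access mode and requested size are assumptions, since the describe output does not show them.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce          # assumption: not visible in the describe output
  resources:
    requests:
      storage: 500Mi         # assumption: requested size not visible in the describe output
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

With a workload of this shape, the pod stays in ErrImagePull/ImagePullBackOff exactly as logged above until the nginx image can actually be pulled, so the test times out even though the volume side of the scenario succeeded.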

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-799296 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-bm5sk" [94489291-66ea-41b5-9147-83370907abcc] Pending
helpers_test.go:352: "mysql-5bb876957f-bm5sk" [94489291-66ea-41b5-9147-83370907abcc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-08 11:21:32.384408602 +0000 UTC m=+1281.489957413
functional_test.go:1804: (dbg) Run:  kubectl --context functional-799296 describe po mysql-5bb876957f-bm5sk -n default
functional_test.go:1804: (dbg) kubectl --context functional-799296 describe po mysql-5bb876957f-bm5sk -n default:
Name:             mysql-5bb876957f-bm5sk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-799296/192.168.39.63
Start Time:       Mon, 08 Sep 2025 11:11:32 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
IP:           10.244.0.13
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qc5g (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4qc5g:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-bm5sk to functional-799296
Warning  Failed     8m36s (x4 over 9m59s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m10s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     7m10s                   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4m52s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m52s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-799296 logs mysql-5bb876957f-bm5sk -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-799296 logs mysql-5bb876957f-bm5sk -n default: exit status 1 (72.105843ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-bm5sk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-799296 logs mysql-5bb876957f-bm5sk -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
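The MySQL failure follows the same pattern: the pod is scheduled within seconds, but docker.io/mysql:5.7 cannot be pulled inside the 10-minute wait because of the same Docker Hub unauthenticated rate limit. testdata/mysql.yaml itself is not included in this report; the following is only a minimal sketch of the workload, reconstructed from the kubectl describe output above. The image, container port, resource requests/limits and the MYSQL_ROOT_PASSWORD value come from the report; the Deployment name and replica count are assumptions (the describe output only shows the ReplicaSet mysql-5bb876957f).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql                 # assumption: only the ReplicaSet name appears in the report
spec:
  replicas: 1                 # assumption
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: docker.io/mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            requests:
              cpu: 600m
              memory: 512Mi
            limits:
              cpu: 700m
              memory: 700Mi

Nothing in the describe output points to a scheduling or resource problem; the only failing step is the image pull, which returns the same toomanyrequests response seen in the kubelet and dockerd logs for the other failed tests in this run.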
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-799296 -n functional-799296
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs -n 25: (1.035971027s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp functional-799296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2557956061/001/cp-test.txt │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /home/docker/cp-test.txt                                               │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ cp             │ functional-799296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh -n functional-799296 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ start          │ -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2                                              │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ start          │ -p functional-799296 --dry-run --alsologtostderr -v=1 --driver=kvm2                                                        │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ ssh            │ functional-799296 ssh echo hello                                                                                           │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh cat /etc/hostname                                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ dashboard      │ --url --port 36195 -p functional-799296 --alsologtostderr -v=1                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ service        │ functional-799296 service list                                                                                             │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service list -o json                                                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service --namespace=default --https --url hello-node                                                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url --format={{.IP}}                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ service        │ functional-799296 service hello-node --url                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format short --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format yaml --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ ssh            │ functional-799296 ssh pgrep buildkitd                                                                                      │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │                     │
	│ image          │ functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr                     │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls                                                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format json --alsologtostderr                                                                 │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ image          │ functional-799296 image ls --format table --alsologtostderr                                                                │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	│ update-context │ functional-799296 update-context --alsologtostderr -v=2                                                                    │ functional-799296 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:11:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:11:34.770029  372629 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:11:34.770326  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770337  372629 out.go:374] Setting ErrFile to fd 2...
	I0908 11:11:34.770343  372629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.770556  372629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:11:34.771311  372629 out.go:368] Setting JSON to false
	I0908 11:11:34.772769  372629 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3240,"bootTime":1757326655,"procs":279,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:11:34.772896  372629 start.go:140] virtualization: kvm guest
	I0908 11:11:34.775116  372629 out.go:179] * [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:11:34.777118  372629 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:11:34.777144  372629 notify.go:220] Checking for updates...
	I0908 11:11:34.780199  372629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:11:34.781673  372629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:11:34.783184  372629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:11:34.784823  372629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:11:34.786318  372629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:11:34.788394  372629 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:11:34.788892  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.788991  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.806214  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0908 11:11:34.806900  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.807525  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.807543  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.808041  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.808244  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.808526  372629 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:11:34.808858  372629 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.808913  372629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.825386  372629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0908 11:11:34.825932  372629 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.826409  372629 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.826443  372629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.826886  372629 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.827109  372629 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.863939  372629 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:11:34.865329  372629 start.go:304] selected driver: kvm2
	I0908 11:11:34.865348  372629 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.865486  372629 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:11:34.866509  372629 cni.go:84] Creating CNI manager for ""
	I0908 11:11:34.866565  372629 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 11:11:34.866620  372629 start.go:348] cluster config:
	{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.868261  372629 out.go:179] * dry-run validation complete!
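	For context on the config dump above: the apiserver ExtraOptions entry is what minikube's --extra-config start flag produces. A minimal sketch of the equivalent start invocation (assuming the standard flag syntax; the exact command used by the test harness is not captured in this log):

	  out/minikube-linux-amd64 start -p functional-799296 --driver=kvm2 --container-runtime=docker \
	    --apiserver-port=8441 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision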
	
	
	==> Docker <==
	Sep 08 11:12:14 functional-799296 dockerd[7653]: time="2025-09-08T11:12:14.163113354Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.505099269Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:12:15 functional-799296 dockerd[7653]: time="2025-09-08T11:12:15.907556845Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.516199292Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:12:23 functional-799296 dockerd[7653]: time="2025-09-08T11:12:23.920186221Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:56 functional-799296 dockerd[7653]: time="2025-09-08T11:12:56.168763802Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:12:57 functional-799296 dockerd[7653]: time="2025-09-08T11:12:57.131125362Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.547498819Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:13:04 functional-799296 dockerd[7653]: time="2025-09-08T11:13:04.953208912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.500732587Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:13:07 functional-799296 dockerd[7653]: time="2025-09-08T11:13:07.924322001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 dockerd[7653]: time="2025-09-08T11:14:22.405899966Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:22 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:14:22Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Sep 08 11:14:24 functional-799296 dockerd[7653]: time="2025-09-08T11:14:24.129416999Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.505816861Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:14:35 functional-799296 dockerd[7653]: time="2025-09-08T11:14:35.902715633Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.505499622Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:14:38 functional-799296 dockerd[7653]: time="2025-09-08T11:14:38.903854978Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:06 functional-799296 dockerd[7653]: time="2025-09-08T11:17:06.400524250Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:06 functional-799296 cri-dockerd[8537]: time="2025-09-08T11:17:06Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Sep 08 11:17:07 functional-799296 dockerd[7653]: time="2025-09-08T11:17:07.252102081Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:18 functional-799296 dockerd[7653]: time="2025-09-08T11:17:18.500534554Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 11:17:18 functional-799296 dockerd[7653]: time="2025-09-08T11:17:18.902748891Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 08 11:17:20 functional-799296 dockerd[7653]: time="2025-09-08T11:17:20.502544662Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 08 11:17:20 functional-799296 dockerd[7653]: time="2025-09-08T11:17:20.903666964Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
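	The repeated "toomanyrequests" entries above show Docker Hub's anonymous pull limit rejecting the kubernetesui/dashboard, kubernetesui/metrics-scraper, mysql:5.7 and nginx pulls, which is why the dashboard (and the MySQL and PersistentVolumeClaim workloads) never get their images. A minimal sketch of a workaround, assuming access to an authenticated Docker Hub account, is to pre-pull the blocked digests seen in the log on the host:

	  docker login     # authenticated pulls get a higher rate limit
	  docker pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	  docker pull docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c

	The pulled images could then be side-loaded into the profile with minikube image load (whether digest references round-trip through image load is an assumption here), or the pulls simply retried once the rate-limit window resets.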
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85a061fe28404       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   5477cbca6c431       hello-node-75c85bcc94-hpxnn
	c0b1dc9ecbc3d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   23b66ffbe7bcc       busybox-mount
	991dc27df9488       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   bd82e1912f29b       hello-node-connect-7d85dfc575-z44vz
	46e16f741b0a8       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   3                   6bc5cdc5ab0c5       coredns-66bc5c9577-jgsmm
	fe4a8982187eb       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   06d5d5b14338d       storage-provisioner
	2b2df4438da81       df0860106674d                                                                                         10 minutes ago      Running             kube-proxy                3                   8677445b2febe       kube-proxy-4vghz
	0d893b24e3bfe       a0af72f2ec6d6                                                                                         10 minutes ago      Running             kube-controller-manager   3                   e442b985ee5af       kube-controller-manager-functional-799296
	fc8fecb4cc17d       90550c43ad2bc                                                                                         10 minutes ago      Running             kube-apiserver            0                   915b70b583616       kube-apiserver-functional-799296
	8c478ed91b786       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      3                   1dadbf1bf582d       etcd-functional-799296
	3e020aa535204       46169d968e920                                                                                         10 minutes ago      Running             kube-scheduler            4                   fa14ee4a8ddc3       kube-scheduler-functional-799296
	98555532e7d99       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            3                   554dbe054cfea       kube-scheduler-functional-799296
	48239ff88be42       a0af72f2ec6d6                                                                                         10 minutes ago      Exited              kube-controller-manager   2                   b84ea368d516e       kube-controller-manager-functional-799296
	a3aded15ab5cd       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                2                   bdb66340f0ffa       kube-proxy-4vghz
	8da95a34aad1d       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   548e32b88c739       storage-provisioner
	7ebc0a8be557e       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   2                   630bd78209478       coredns-66bc5c9577-jgsmm
	28f5c5f342b5a       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      2                   85dc20bb58775       etcd-functional-799296
	
	
	==> coredns [46e16f741b0a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47197 - 38538 "HINFO IN 2640664162402965986.7445642495243262507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.344615448s
	
	
	==> coredns [7ebc0a8be557] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40026 - 49666 "HINFO IN 6934977106845304138.5256607027527752237. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102570572s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-799296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-799296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a399eb27affc71ce2737faeeac659fc2ce938c64
	                    minikube.k8s.io/name=functional-799296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T11_08_34_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 11:08:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-799296
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 11:21:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 11:19:47 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 11:19:47 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 11:19:47 +0000   Mon, 08 Sep 2025 11:08:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 11:19:47 +0000   Mon, 08 Sep 2025 11:08:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    functional-799296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b5d811bb77448bf80c1bfb1571c2de4
	  System UUID:                9b5d811b-b774-48bf-80c1-bfb1571c2de4
	  Boot ID:                    07459d39-5da9-4917-ab00-ad155ef2fd22
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hpxnn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-z44vz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bm5sk                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-jgsmm                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-799296                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-799296              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-799296     200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4vghz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-799296              100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vt6p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-656tt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  NodeReady                12m                kubelet          Node functional-799296 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-799296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-799296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-799296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-799296 event: Registered Node functional-799296 in Controller
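	  Cross-check of the Allocated resources block above: the CPU request total is the sum of the per-pod requests listed, 600m + 100m + 100m + 250m + 200m + 100m = 1350m, i.e. 67% of the node's 2 CPUs; memory requests are 512Mi + 70Mi + 100Mi = 682Mi, about 17% of the 4008588Ki allocatable.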
	
	
	==> dmesg <==
	[  +0.108325] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113567] kauditd_printk_skb: 373 callbacks suppressed
	[  +0.099329] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.140732] kauditd_printk_skb: 166 callbacks suppressed
	[  +0.069461] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.835030] kauditd_printk_skb: 273 callbacks suppressed
	[Sep 8 11:09] kauditd_printk_skb: 16 callbacks suppressed
	[ +15.176955] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.502717] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.001483] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.201727] kauditd_printk_skb: 353 callbacks suppressed
	[  +4.818024] kauditd_printk_skb: 169 callbacks suppressed
	[Sep 8 11:10] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.173869] kauditd_printk_skb: 5 callbacks suppressed
	[ +15.172551] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.550566] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.112394] kauditd_printk_skb: 420 callbacks suppressed
	[  +5.476814] kauditd_printk_skb: 102 callbacks suppressed
	[Sep 8 11:11] kauditd_printk_skb: 119 callbacks suppressed
	[  +0.630517] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.573282] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.518957] kauditd_printk_skb: 114 callbacks suppressed
	[  +4.393821] kauditd_printk_skb: 50 callbacks suppressed
	[  +0.000017] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.595612] crun[13335]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [28f5c5f342b5] <==
	{"level":"warn","ts":"2025-09-08T11:09:53.392632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.402955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.411003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.421141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.431370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.440393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:09:53.513378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42368","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:10:35.620501Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T11:10:35.620714Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	{"level":"error","ts":"2025-09-08T11:10:35.620837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T11:10:42.623708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.623746Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"365d90f3070fcb7b","current-leader-member-id":"365d90f3070fcb7b"}
	{"level":"info","ts":"2025-09-08T11:10:42.623848Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T11:10:42.623860Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627642Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627701Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627716Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627774Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T11:10:42.627784Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.63:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T11:10:42.627789Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630783Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"error","ts":"2025-09-08T11:10:42.630863Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.63:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T11:10:42.630921Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.63:2380"}
	{"level":"info","ts":"2025-09-08T11:10:42.630931Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-799296","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.63:2380"],"advertise-client-urls":["https://192.168.39.63:2379"]}
	
	
	==> etcd [8c478ed91b78] <==
	{"level":"warn","ts":"2025-09-08T11:10:55.434607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.447806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.461112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.481382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.489932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.503570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.511719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.535779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.545504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.557051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.569520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.581750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.601578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.607815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.629943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.641538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.653876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.681534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.687758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.705513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.715377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T11:10:55.816477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44268","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T11:20:54.603405Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1297}
	{"level":"info","ts":"2025-09-08T11:20:54.630602Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1297,"took":"26.161391ms","hash":3164267488,"current-db-size-bytes":3784704,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1892352,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-08T11:20:54.630652Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3164267488,"revision":1297,"compact-revision":-1}
	
	
	==> kernel <==
	 11:21:33 up 13 min,  0 users,  load average: 0.23, 0.25, 0.25
	Linux functional-799296 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep  4 13:14:36 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fc8fecb4cc17] <==
	I0908 11:11:00.306747       1 controller.go:667] quota admission added evaluator for: endpoints
	I0908 11:11:15.543162       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.203.138"}
	I0908 11:11:20.324262       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.206.43"}
	I0908 11:11:28.065971       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.26.100"}
	I0908 11:11:32.038141       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.137.235"}
	I0908 11:11:36.337658       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 11:11:36.788168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.225.130"}
	I0908 11:11:36.838104       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.71.160"}
	I0908 11:12:09.225521       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:12:22.079669       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:29.636876       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:13:31.016508       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:44.869754       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:14:55.565168       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.937898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:16:11.940238       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:17:20.644574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:17:31.364159       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:18:45.368853       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:18:47.121883       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:19:48.093616       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:19:53.927966       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:20:53.571539       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 11:20:56.494354       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 11:21:06.145115       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [0d893b24e3bf] <==
	I0908 11:10:59.964533       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 11:10:59.964287       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 11:10:59.964643       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 11:10:59.967161       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0908 11:10:59.971748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 11:10:59.972999       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 11:10:59.975477       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0908 11:10:59.975425       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 11:10:59.976862       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 11:10:59.977093       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 11:10:59.977210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 11:10:59.984290       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0908 11:10:59.990150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 11:10:59.992361       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 11:10:59.994798       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0908 11:11:00.001637       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 11:11:00.019346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0908 11:11:36.476977       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.497516       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535118       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.535662       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.549933       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.550659       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567343       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 11:11:36.567493       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [48239ff88be4] <==
	I0908 11:10:48.950763       1 serving.go:386] Generated self-signed cert in-memory
	I0908 11:10:49.522609       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0908 11:10:49.522646       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:49.533303       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0908 11:10:49.533348       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0908 11:10:49.533296       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0908 11:10:49.533699       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [2b2df4438da8] <==
	I0908 11:10:58.562390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 11:10:58.664361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 11:10:58.664481       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.63"]
	E0908 11:10:58.664592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 11:10:58.781256       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0908 11:10:58.781560       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0908 11:10:58.781685       1 server_linux.go:132] "Using iptables Proxier"
	I0908 11:10:58.797904       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 11:10:58.799903       1 server.go:527] "Version info" version="v1.34.0"
	I0908 11:10:58.800229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:58.807635       1 config.go:200] "Starting service config controller"
	I0908 11:10:58.807888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 11:10:58.807928       1 config.go:106] "Starting endpoint slice config controller"
	I0908 11:10:58.808008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 11:10:58.808168       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 11:10:58.808230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 11:10:58.812083       1 config.go:309] "Starting node config controller"
	I0908 11:10:58.812114       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 11:10:58.908081       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 11:10:58.908290       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 11:10:58.908318       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 11:10:58.913283       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [a3aded15ab5c] <==
	I0908 11:10:48.659680       1 server_linux.go:53] "Using iptables proxy"
	I0908 11:10:48.775257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0908 11:10:48.781034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:49.776207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-799296&limit=500&resourceVersion=0\": dial tcp 192.168.39.63:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3e020aa53520] <==
	I0908 11:10:56.486742       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 11:10:56.486786       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 11:10:56.490541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.490612       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 11:10:56.491671       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 11:10:56.491994       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E0908 11:10:56.504869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 11:10:56.505009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 11:10:56.508275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 11:10:56.508756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 11:10:56.509216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 11:10:56.509711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 11:10:56.509806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 11:10:56.510369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 11:10:56.510724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0908 11:10:56.511724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 11:10:56.511990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 11:10:56.512248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 11:10:56.511239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 11:10:56.512796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 11:10:56.513731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 11:10:56.513992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 11:10:56.514034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 11:10:56.514370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I0908 11:10:56.590779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [98555532e7d9] <==
	I0908 11:10:49.688822       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 08 11:20:17 functional-799296 kubelet[9815]: E0908 11:20:17.298121    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:20:17 functional-799296 kubelet[9815]: E0908 11:20:17.301649    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:20:20 functional-799296 kubelet[9815]: E0908 11:20:20.307534    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:20:25 functional-799296 kubelet[9815]: E0908 11:20:25.299789    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:20:31 functional-799296 kubelet[9815]: E0908 11:20:31.297766    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:20:32 functional-799296 kubelet[9815]: E0908 11:20:32.302064    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:20:34 functional-799296 kubelet[9815]: E0908 11:20:34.301832    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:20:38 functional-799296 kubelet[9815]: E0908 11:20:38.302155    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:20:45 functional-799296 kubelet[9815]: E0908 11:20:45.299418    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:20:46 functional-799296 kubelet[9815]: E0908 11:20:46.298536    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:20:47 functional-799296 kubelet[9815]: E0908 11:20:47.300589    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:20:51 functional-799296 kubelet[9815]: E0908 11:20:51.299565    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:20:56 functional-799296 kubelet[9815]: E0908 11:20:56.302590    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:20:59 functional-799296 kubelet[9815]: E0908 11:20:59.298139    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:21:00 functional-799296 kubelet[9815]: E0908 11:21:00.303801    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:21:04 functional-799296 kubelet[9815]: E0908 11:21:04.304494    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:21:07 functional-799296 kubelet[9815]: E0908 11:21:07.299735    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:21:13 functional-799296 kubelet[9815]: E0908 11:21:13.297959    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:21:15 functional-799296 kubelet[9815]: E0908 11:21:15.301405    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	Sep 08 11:21:15 functional-799296 kubelet[9815]: E0908 11:21:15.302550    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:21:18 functional-799296 kubelet[9815]: E0908 11:21:18.301131    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:21:25 functional-799296 kubelet[9815]: E0908 11:21:25.298003    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="e057834f-8639-426b-b18b-b92cc9b17156"
	Sep 08 11:21:26 functional-799296 kubelet[9815]: E0908 11:21:26.300567    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vt6p2" podUID="a273d9a7-fb0c-4b8d-983d-5281c9c3e63d"
	Sep 08 11:21:30 functional-799296 kubelet[9815]: E0908 11:21:30.300843    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-bm5sk" podUID="94489291-66ea-41b5-9147-83370907abcc"
	Sep 08 11:21:30 functional-799296 kubelet[9815]: E0908 11:21:30.300872    9815 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-656tt" podUID="899616a7-67f1-4a7d-b570-23af938cef3f"
	
	
	==> storage-provisioner [8da95a34aad1] <==
	I0908 11:10:08.878220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 11:10:08.888633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 11:10:08.888730       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 11:10:08.892145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:12.349561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:16.610079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:20.209144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:23.264748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.287715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.293629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.293852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 11:10:26.294105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	I0908 11:10:26.295179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c9d7864-016f-4339-b126-11f104bc2c6b", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181 became leader
	W0908 11:10:26.304031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:26.307788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 11:10:26.395277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-799296_8e479d11-d693-40bb-9c1e-b39710a5c181!
	W0908 11:10:28.311332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:28.320750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.325001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:30.331391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.334794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:32.341394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.345571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:10:34.356004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe4a8982187e] <==
	W0908 11:21:09.591918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:11.595353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:11.602302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:13.605844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:13.612055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:15.615769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:15.623940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:17.627666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:17.632983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:19.638202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:19.645494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:21.651854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:21.661296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:23.668189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:23.676285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:25.680627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:25.692583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:27.696918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:27.703379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:29.707571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:29.715931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:31.719524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:31.725601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:33.731250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 11:21:33.738922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-799296 -n functional-799296
helpers_test.go:269: (dbg) Run:  kubectl --context functional-799296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1 (87.413189ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://c0b1dc9ecbc3d940b52991b15e1830ada53642504383ccc7d74239f147ce04e9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 11:11:24 +0000
	      Finished:     Mon, 08 Sep 2025 11:11:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vkd9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vkd9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-799296
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.126s (2.702s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-bm5sk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:32 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qc5g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qc5g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-bm5sk to functional-799296
	  Warning  Failed     8m38s (x4 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m12s                 kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-799296/192.168.39.63
	Start Time:       Mon, 08 Sep 2025 11:11:25 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cr22d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cr22d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-799296
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vt6p2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-656tt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-799296 describe pod busybox-mount mysql-5bb876957f-bm5sk sp-pod dashboard-metrics-scraper-77bf4d6c4c-vt6p2 kubernetes-dashboard-855c9754f9-656tt: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.40s)
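Note: the three failures in this run (DashboardCmd, PersistentVolumeClaim, MySQL) appear to trace back to the same root cause visible in the kubelet events above: Docker Hub's unauthenticated pull rate limit ("toomanyrequests: You have reached your unauthenticated pull rate limit") blocked the docker.io/mysql:5.7, docker.io/nginx and kubernetesui dashboard images from being pulled, so the pods never left ImagePullBackOff. A minimal sketch of one possible mitigation, assuming the images can be pulled on the CI host with authenticated credentials and then side-loaded into the profile (illustrative only, not part of the existing test harness):

	# pull with an authenticated Docker Hub login on the host, then side-load into the profile
	docker login                                   # authenticated pulls are not subject to the anonymous rate limit
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx
	minikube -p functional-799296 image load docker.io/mysql:5.7
	minikube -p functional-799296 image load docker.io/nginx

Alternatively, starting the profile with a registry mirror (minikube start --registry-mirror=<mirror-url>) would keep in-cluster pulls off docker.io entirely; which option fits best depends on the CI environment and is outside the scope of this report.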

                                                
                                    

Test pass (308/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 8.58
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 3.66
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.67
22 TestOffline 116.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 219.63
29 TestAddons/serial/Volcano 45.1
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 10.6
35 TestAddons/parallel/Registry 23.07
36 TestAddons/parallel/RegistryCreds 0.69
37 TestAddons/parallel/Ingress 24.02
38 TestAddons/parallel/InspektorGadget 6.19
39 TestAddons/parallel/MetricsServer 7.5
41 TestAddons/parallel/CSI 35.48
42 TestAddons/parallel/Headlamp 29.77
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 64.07
45 TestAddons/parallel/NvidiaDevicePlugin 6.8
46 TestAddons/parallel/Yakd 12.04
48 TestAddons/StoppedEnableDisable 12.65
49 TestCertOptions 66.97
50 TestCertExpiration 323.18
51 TestDockerFlags 65.09
52 TestForceSystemdFlag 54.53
53 TestForceSystemdEnv 86.25
55 TestKVMDriverInstallOrUpdate 2.3
59 TestErrorSpam/setup 50.45
60 TestErrorSpam/start 0.4
61 TestErrorSpam/status 0.85
62 TestErrorSpam/pause 1.46
63 TestErrorSpam/unpause 1.78
64 TestErrorSpam/stop 15.64
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 89.61
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 56.46
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.56
76 TestFunctional/serial/CacheCmd/cache/add_local 1.38
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 56.02
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.07
87 TestFunctional/serial/LogsFileCmd 1.1
88 TestFunctional/serial/InvalidService 4.22
90 TestFunctional/parallel/ConfigCmd 0.38
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.98
98 TestFunctional/parallel/ServiceCmdConnect 9.52
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.41
103 TestFunctional/parallel/CpCmd 1.3
105 TestFunctional/parallel/FileSync 0.22
106 TestFunctional/parallel/CertSync 1.4
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
114 TestFunctional/parallel/License 0.24
117 TestFunctional/parallel/MountCmd/any-port 8.65
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.3
130 TestFunctional/parallel/ImageCommands/Setup 1.57
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.16
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.56
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
138 TestFunctional/parallel/DockerEnv/bash 0.88
139 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
140 TestFunctional/parallel/MountCmd/specific-port 1.7
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
145 TestFunctional/parallel/ProfileCmd/profile_list 0.44
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
148 TestFunctional/parallel/ServiceCmd/List 0.52
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
151 TestFunctional/parallel/ServiceCmd/Format 0.31
152 TestFunctional/parallel/ServiceCmd/URL 0.32
153 TestFunctional/parallel/Version/short 0.06
154 TestFunctional/parallel/Version/components 0.47
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
159 TestGvisorAddon 242.89
162 TestMultiControlPlane/serial/StartCluster 242
163 TestMultiControlPlane/serial/DeployApp 6.53
164 TestMultiControlPlane/serial/PingHostFromPods 1.34
165 TestMultiControlPlane/serial/AddWorkerNode 55.51
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
168 TestMultiControlPlane/serial/CopyFile 14.13
169 TestMultiControlPlane/serial/StopSecondaryNode 13.03
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.46
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 167.24
174 TestMultiControlPlane/serial/DeleteSecondaryNode 7.83
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 36.38
177 TestMultiControlPlane/serial/RestartCluster 134.09
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 97.77
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
183 TestImageBuild/serial/Setup 51.33
184 TestImageBuild/serial/NormalBuild 1.69
185 TestImageBuild/serial/BuildWithBuildArg 1.07
186 TestImageBuild/serial/BuildWithDockerIgnore 0.72
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.1
191 TestJSONOutput/start/Command 95.53
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.69
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.65
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 12.45
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.23
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 106.59
223 TestMountStart/serial/StartWithMountFirst 29.37
224 TestMountStart/serial/VerifyMountFirst 0.41
225 TestMountStart/serial/StartWithMountSecond 30.18
226 TestMountStart/serial/VerifyMountSecond 0.4
227 TestMountStart/serial/DeleteFirst 0.71
228 TestMountStart/serial/VerifyMountPostDelete 0.4
229 TestMountStart/serial/Stop 1.4
230 TestMountStart/serial/RestartStopped 24.61
231 TestMountStart/serial/VerifyMountPostStop 0.41
234 TestMultiNode/serial/FreshStart2Nodes 133.15
235 TestMultiNode/serial/DeployApp2Nodes 5.48
236 TestMultiNode/serial/PingHostFrom2Pods 0.93
237 TestMultiNode/serial/AddNode 55
238 TestMultiNode/serial/MultiNodeLabels 0.07
239 TestMultiNode/serial/ProfileList 0.65
240 TestMultiNode/serial/CopyFile 8.09
241 TestMultiNode/serial/StopNode 3.28
242 TestMultiNode/serial/StartAfterStop 39.74
243 TestMultiNode/serial/RestartKeepsNodes 178.92
244 TestMultiNode/serial/DeleteNode 2.46
245 TestMultiNode/serial/StopMultiNode 24.06
246 TestMultiNode/serial/RestartMultiNode 132.76
247 TestMultiNode/serial/ValidateNameConflict 53.08
252 TestPreload 159.39
254 TestScheduledStopUnix 121.94
255 TestSkaffold 136.61
258 TestRunningBinaryUpgrade 182.17
260 TestKubernetesUpgrade 254.78
273 TestStoppedBinaryUpgrade/Setup 0.66
274 TestStoppedBinaryUpgrade/Upgrade 157.85
276 TestPause/serial/Start 117.6
285 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
286 TestNoKubernetes/serial/StartWithK8s 73.91
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
288 TestPause/serial/SecondStartNoReconfiguration 75.44
289 TestNoKubernetes/serial/StartWithStopK8s 43.99
290 TestNoKubernetes/serial/Start 30.28
291 TestPause/serial/Pause 0.67
292 TestPause/serial/VerifyStatus 0.28
293 TestPause/serial/Unpause 0.64
294 TestPause/serial/PauseAgain 0.85
295 TestPause/serial/DeletePaused 1.05
296 TestPause/serial/VerifyDeletedResources 0.49
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
298 TestNoKubernetes/serial/ProfileList 1.14
299 TestNoKubernetes/serial/Stop 1.45
300 TestNoKubernetes/serial/StartNoArgs 74.91
301 TestNetworkPlugins/group/auto/Start 121.98
302 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
303 TestNetworkPlugins/group/kindnet/Start 105.66
304 TestNetworkPlugins/group/calico/Start 141.41
305 TestNetworkPlugins/group/auto/KubeletFlags 0.27
306 TestNetworkPlugins/group/auto/NetCatPod 11.35
307 TestNetworkPlugins/group/auto/DNS 0.17
308 TestNetworkPlugins/group/auto/Localhost 0.15
309 TestNetworkPlugins/group/auto/HairPin 0.18
310 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
312 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
313 TestNetworkPlugins/group/custom-flannel/Start 74.2
314 TestNetworkPlugins/group/kindnet/DNS 0.22
315 TestNetworkPlugins/group/kindnet/Localhost 0.19
316 TestNetworkPlugins/group/kindnet/HairPin 0.17
317 TestNetworkPlugins/group/false/Start 121.71
318 TestNetworkPlugins/group/enable-default-cni/Start 132.25
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.24
321 TestNetworkPlugins/group/calico/NetCatPod 11.29
322 TestNetworkPlugins/group/calico/DNS 0.21
323 TestNetworkPlugins/group/calico/Localhost 0.18
324 TestNetworkPlugins/group/calico/HairPin 0.2
325 TestNetworkPlugins/group/flannel/Start 107.36
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
328 TestNetworkPlugins/group/custom-flannel/DNS 0.23
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
331 TestNetworkPlugins/group/bridge/Start 110.5
332 TestNetworkPlugins/group/false/KubeletFlags 0.23
333 TestNetworkPlugins/group/false/NetCatPod 12.3
334 TestNetworkPlugins/group/false/DNS 0.22
335 TestNetworkPlugins/group/false/Localhost 0.21
336 TestNetworkPlugins/group/false/HairPin 0.18
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
339 TestNetworkPlugins/group/kubenet/Start 103.42
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
345 TestNetworkPlugins/group/flannel/NetCatPod 12.28
346 TestNetworkPlugins/group/flannel/DNS 0.17
347 TestNetworkPlugins/group/flannel/Localhost 0.13
348 TestNetworkPlugins/group/flannel/HairPin 0.14
350 TestStartStop/group/old-k8s-version/serial/FirstStart 122.92
352 TestStartStop/group/no-preload/serial/FirstStart 141.07
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
354 TestNetworkPlugins/group/bridge/NetCatPod 11.28
355 TestNetworkPlugins/group/bridge/DNS 0.22
356 TestNetworkPlugins/group/bridge/Localhost 0.19
357 TestNetworkPlugins/group/bridge/HairPin 0.22
359 TestStartStop/group/embed-certs/serial/FirstStart 107.38
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
361 TestNetworkPlugins/group/kubenet/NetCatPod 12.33
362 TestNetworkPlugins/group/kubenet/DNS 0.17
363 TestNetworkPlugins/group/kubenet/Localhost 0.14
364 TestNetworkPlugins/group/kubenet/HairPin 0.15
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.05
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
369 TestStartStop/group/old-k8s-version/serial/Stop 12.63
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
371 TestStartStop/group/old-k8s-version/serial/SecondStart 46.57
372 TestStartStop/group/no-preload/serial/DeployApp 9.39
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
374 TestStartStop/group/embed-certs/serial/DeployApp 9.4
375 TestStartStop/group/no-preload/serial/Stop 12.49
376 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
377 TestStartStop/group/embed-certs/serial/Stop 12.43
378 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
379 TestStartStop/group/no-preload/serial/SecondStart 55.16
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
381 TestStartStop/group/embed-certs/serial/SecondStart 65.69
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
384 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
385 TestStartStop/group/old-k8s-version/serial/Pause 3.77
386 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
387 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4.95
389 TestStartStop/group/newest-cni/serial/FirstStart 62.53
390 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.59
391 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.88
394 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
395 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
396 TestStartStop/group/no-preload/serial/Pause 3.2
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.15
398 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
400 TestStartStop/group/embed-certs/serial/Pause 2.81
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
403 TestStartStop/group/newest-cni/serial/Stop 12.44
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
405 TestStartStop/group/newest-cni/serial/SecondStart 36.1
406 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.72
410 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/newest-cni/serial/Pause 2.65
x
+
TestDownloadOnly/v1.28.0/json-events (8.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-158390 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-158390 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 : (8.578566638s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 11:00:19.515241  364318 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0908 11:00:19.515356  364318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-360138/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-158390
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-158390: exit status 85 (74.87355ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-158390 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-158390 │ jenkins │ v1.36.0 │ 08 Sep 25 11:00 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:00:10
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:00:10.983102  364330 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:00:10.983357  364330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:00:10.983368  364330 out.go:374] Setting ErrFile to fd 2...
	I0908 11:00:10.983372  364330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:00:10.983631  364330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	W0908 11:00:10.983819  364330 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21512-360138/.minikube/config/config.json: open /home/jenkins/minikube-integration/21512-360138/.minikube/config/config.json: no such file or directory
	I0908 11:00:10.984474  364330 out.go:368] Setting JSON to true
	I0908 11:00:10.985543  364330 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2556,"bootTime":1757326655,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:00:10.985610  364330 start.go:140] virtualization: kvm guest
	I0908 11:00:10.988196  364330 out.go:99] [download-only-158390] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0908 11:00:10.988379  364330 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21512-360138/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 11:00:10.988416  364330 notify.go:220] Checking for updates...
	I0908 11:00:10.990114  364330 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:00:10.991913  364330 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:00:10.993613  364330 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:00:10.995106  364330 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:00:10.996571  364330 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0908 11:00:10.998941  364330 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 11:00:10.999207  364330 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:00:11.034733  364330 out.go:99] Using the kvm2 driver based on user configuration
	I0908 11:00:11.034775  364330 start.go:304] selected driver: kvm2
	I0908 11:00:11.034784  364330 start.go:918] validating driver "kvm2" against <nil>
	I0908 11:00:11.035180  364330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:00:11.035276  364330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21512-360138/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0908 11:00:11.041050  364330 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0908 11:00:11.043136  364330 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0908 11:00:11.043287  364330 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:00:11.668066  364330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 11:00:11.668723  364330 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0908 11:00:11.668929  364330 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 11:00:11.668975  364330 cni.go:84] Creating CNI manager for ""
	I0908 11:00:11.669029  364330 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0908 11:00:11.669042  364330 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0908 11:00:11.669127  364330 start.go:348] cluster config:
	{Name:download-only-158390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-158390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:00:11.669343  364330 iso.go:125] acquiring lock: {Name:mk6be6b0ae3230e7cb322a20efc2e9908291607f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 11:00:11.671602  364330 out.go:99] Downloading VM boot image ...
	I0908 11:00:11.671660  364330 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21512-360138/.minikube/cache/iso/amd64/minikube-v1.36.0-1756980912-21488-amd64.iso
	I0908 11:00:15.524126  364330 out.go:99] Starting "download-only-158390" primary control-plane node in "download-only-158390" cluster
	I0908 11:00:15.524171  364330 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 11:00:15.543787  364330 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0908 11:00:15.543836  364330 cache.go:58] Caching tarball of preloaded images
	I0908 11:00:15.544026  364330 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0908 11:00:15.546032  364330 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 11:00:15.546066  364330 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0908 11:00:15.575206  364330 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21512-360138/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-158390 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158390"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
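
Note: the download URLs in the log above carry a "checksum=file:<url>.sha256" query (ISO and kvm2 driver) or an inline "checksum=md5:<hex>" (preload tarball), i.e. each artifact is verified after download. The following is a minimal Go sketch of the sha256-file variant, assuming a sibling ".sha256" file is published next to the artifact; it is an illustrative helper, not minikube's own download.go.

// verifydl.go: stream a download through SHA-256 and compare the digest
// with the published <url>.sha256 file (illustrative helper).
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func get(url string) (*http.Response, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		resp.Body.Close()
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return resp, nil
}

func main() {
	url := os.Args[1] // e.g. the minikube ISO URL from the log above

	// Hash the payload while streaming so large ISOs/tarballs never
	// have to fit in memory.
	resp, err := get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	// The published checksum is the first field of "<url>.sha256".
	sumResp, err := get(url + ".sha256")
	if err != nil {
		panic(err)
	}
	defer sumResp.Body.Close()
	raw, err := io.ReadAll(sumResp.Body)
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(raw))
	if len(fields) == 0 {
		panic("empty checksum file")
	}
	want := fields[0]

	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("checksum OK:", got)
}

Usage would be, for example: go run verifydl.go https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso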

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-158390
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (3.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-735872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-735872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2 : (3.664156023s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (3.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 11:00:23.564356  364318 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0908 11:00:23.564410  364318 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-360138/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-735872
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-735872: exit status 85 (71.548713ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-158390 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=kvm2 │ download-only-158390 │ jenkins │ v1.36.0 │ 08 Sep 25 11:00 UTC │                     │
	│ delete  │ --all                                                                                                                                           │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 11:00 UTC │ 08 Sep 25 11:00 UTC │
	│ delete  │ -p download-only-158390                                                                                                                         │ download-only-158390 │ jenkins │ v1.36.0 │ 08 Sep 25 11:00 UTC │ 08 Sep 25 11:00 UTC │
	│ start   │ -o=json --download-only -p download-only-735872 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=kvm2 │ download-only-735872 │ jenkins │ v1.36.0 │ 08 Sep 25 11:00 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 11:00:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 11:00:19.945970  364522 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:00:19.946282  364522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:00:19.946295  364522 out.go:374] Setting ErrFile to fd 2...
	I0908 11:00:19.946299  364522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:00:19.946493  364522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:00:19.947181  364522 out.go:368] Setting JSON to true
	I0908 11:00:19.948153  364522 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2565,"bootTime":1757326655,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:00:19.948224  364522 start.go:140] virtualization: kvm guest
	I0908 11:00:19.950214  364522 out.go:99] [download-only-735872] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:00:19.950456  364522 notify.go:220] Checking for updates...
	I0908 11:00:19.951991  364522 out.go:171] MINIKUBE_LOCATION=21512
	I0908 11:00:19.953501  364522 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:00:19.954907  364522 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:00:19.956281  364522 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:00:19.957716  364522 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-735872 host does not exist
	  To start a cluster, run: "minikube start -p download-only-735872"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-735872
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I0908 11:00:24.227823  364318 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-180281 --alsologtostderr --binary-mirror http://127.0.0.1:41719 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-180281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-180281
--- PASS: TestBinaryMirror (0.67s)
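
Note: TestBinaryMirror points minikube at a short-lived local HTTP mirror (--binary-mirror http://127.0.0.1:41719) instead of dl.k8s.io. A sketch of such a mirror, assuming a local ./mirror directory laid out like the upstream release paths (e.g. ./mirror/v1.34.0/bin/linux/amd64/kubectl); the directory name and port are illustrative, not the test's own server.

// binmirror.go: serve a directory over HTTP so it can be passed to
// `minikube start --binary-mirror http://127.0.0.1:41719` (illustrative).
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving ./mirror on 127.0.0.1:41719")
	log.Fatal(http.ListenAndServe("127.0.0.1:41719", fs))
}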

                                                
                                    
x
+
TestOffline (116.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-507329 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-507329 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2 : (1m55.827990745s)
helpers_test.go:175: Cleaning up "offline-docker-507329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-507329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-507329: (1.151435541s)
--- PASS: TestOffline (116.98s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-733032
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-733032: exit status 85 (70.513434ms)

                                                
                                                
-- stdout --
	* Profile "addons-733032" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-733032"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-733032
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-733032: exit status 85 (71.114897ms)

                                                
                                                
-- stdout --
	* Profile "addons-733032" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-733032"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (219.63s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-733032 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-733032 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.627520442s)
--- PASS: TestAddons/Setup (219.63s)

                                                
                                    
x
+
TestAddons/serial/Volcano (45.1s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 30.707462ms
addons_test.go:868: volcano-scheduler stabilized in 31.397806ms
addons_test.go:884: volcano-controller stabilized in 35.737554ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-5tlwr" [eb675c04-f0f0-4b2a-8196-f4a7a6b8a700] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.049512257s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-w8p8j" [ccd738f9-8b96-41e9-ba55-34e1de0bb2b1] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00978892s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-748g6" [cc147067-a58a-4598-b140-f77f00b68efa] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004328483s
addons_test.go:903: (dbg) Run:  kubectl --context addons-733032 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-733032 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-733032 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [7a664b8a-b153-49aa-abc7-e555e4616110] Pending
helpers_test.go:352: "test-job-nginx-0" [7a664b8a-b153-49aa-abc7-e555e4616110] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [7a664b8a-b153-49aa-abc7-e555e4616110] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.005082251s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable volcano --alsologtostderr -v=1: (11.588681318s)
--- PASS: TestAddons/serial/Volcano (45.10s)
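
Note: the repeated "waiting 6m0s for pods matching ..." / "healthy within ..." lines in this and the following addon tests come from polling pods by label selector until every match reports Running. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default location; the file and function names are illustrative, not minikube's helpers_test.go.

// waitforpods.go: poll a namespace until every pod matching a label
// selector reports phase Running (illustrative, not minikube's helper).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// List errors are treated as "not ready yet" and simply retried.
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods %q in %q not healthy within %s", selector, ns, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(context.Background(), cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all pods Running")
}

The same loop, with a different namespace and selector, accounts for the Registry, MetricsServer, Headlamp and Yakd waits further down.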

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-733032 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-733032 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.6s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-733032 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-733032 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0ffe9439-caca-4390-b3b1-82fa592409f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0ffe9439-caca-4390-b3b1-82fa592409f2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004540335s
addons_test.go:694: (dbg) Run:  kubectl --context addons-733032 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-733032 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-733032 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.60s)

                                                
                                    
x
+
TestAddons/parallel/Registry (23.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.663396ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-r97wn" [f25de9cb-c70b-499d-afb6-64e1dc3fc982] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005575565s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-429vx" [6ba64b02-dc2c-4323-96bb-fd27b2dbc166] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005039731s
addons_test.go:392: (dbg) Run:  kubectl --context addons-733032 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-733032 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-733032 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.137978901s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.07s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.330225ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-733032
addons_test.go:332: (dbg) Run:  kubectl --context addons-733032 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (24.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-733032 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-733032 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-733032 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [265a366b-b8b7-4976-b855-6e89945c9eb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [265a366b-b8b7-4976-b855-6e89945c9eb1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.012319618s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-733032 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.186
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable ingress-dns --alsologtostderr -v=1: (1.673337318s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable ingress --alsologtostderr -v=1: (7.847600928s)
--- PASS: TestAddons/parallel/Ingress (24.02s)
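
Note: the ingress check above curls 127.0.0.1 from inside the VM while presenting the ingress host via a Host header. An equivalent check can be made from the host against the VM IP that "minikube ip" reports (192.168.39.186 in this run); the sketch below is illustrative and assumes the nginx Ingress rule from testdata/nginx-ingress-v1.yaml is still applied.

// ingresscheck.go: hit the cluster's ingress controller by IP while
// overriding the Host header so the request matches the nginx rule.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://192.168.39.186/", nil)
	if err != nil {
		panic(err)
	}
	// For client requests, req.Host overrides the Host header Go sends.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, "-", len(body), "bytes of body")
}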

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.19s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-54xwx" [d8807f3e-d946-4ca0-925c-134bcfeab29f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00457222s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.19s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.5s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.197364ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-88nws" [0e6eb8a4-83e6-4f2c-a61f-18aa01be6200] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.016251272s
addons_test.go:463: (dbg) Run:  kubectl --context addons-733032 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable metrics-server --alsologtostderr -v=1: (1.380262603s)
--- PASS: TestAddons/parallel/MetricsServer (7.50s)

                                                
                                    
x
+
TestAddons/parallel/CSI (35.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 11:05:22.418933  364318 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 11:05:22.423477  364318 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 11:05:22.423510  364318 kapi.go:107] duration metric: took 4.610437ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.622514ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-733032 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-733032 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3a21a4ca-c96a-40fe-b3e4-a43aa6a4e7d6] Pending
helpers_test.go:352: "task-pv-pod" [3a21a4ca-c96a-40fe-b3e4-a43aa6a4e7d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3a21a4ca-c96a-40fe-b3e4-a43aa6a4e7d6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.006005686s
addons_test.go:572: (dbg) Run:  kubectl --context addons-733032 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-733032 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-733032 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-733032 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-733032 delete pod task-pv-pod: (1.267376592s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-733032 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-733032 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-733032 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1f54cb6f-cf83-4e33-8272-31b78e8fe3b9] Pending
helpers_test.go:352: "task-pv-pod-restore" [1f54cb6f-cf83-4e33-8272-31b78e8fe3b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1f54cb6f-cf83-4e33-8272-31b78e8fe3b9] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00415062s
addons_test.go:614: (dbg) Run:  kubectl --context addons-733032 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-733032 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-733032 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.057827391s)
--- PASS: TestAddons/parallel/CSI (35.48s)
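
Note: the CSI test repeatedly shells out to kubectl get pvc ... -o jsonpath={.status.phase} until the claim binds. The same wait expressed directly against the API, as an illustrative sketch assuming the default kubeconfig; it polls the hpvc claim from this test until it reports Bound.

// pvcwait.go: poll a PersistentVolumeClaim until it reaches phase Bound,
// mirroring the repeated jsonpath checks above (illustrative helper).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(context.Background(), "hpvc", metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		if time.Now().After(deadline) {
			panic("pvc hpvc never reached Bound")
		}
		time.Sleep(2 * time.Second)
	}
}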

                                                
                                    
x
+
TestAddons/parallel/Headlamp (29.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-733032 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-733032 --alsologtostderr -v=1: (1.060546636s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-ffmkb" [6204663f-72c1-45b7-a1af-6d2d82fe378b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-ffmkb" [6204663f-72c1-45b7-a1af-6d2d82fe378b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.009489144s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable headlamp --alsologtostderr -v=1: (6.699172428s)
--- PASS: TestAddons/parallel/Headlamp (29.77s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-4zrx7" [393925df-6229-459e-8eb8-db2d29e66162] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006352208s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (64.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-733032 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-733032 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [11e7a25d-37cb-42aa-9017-22e556c6536b] Pending
2025/09/08 11:05:31 [DEBUG] GET http://192.168.39.186:5000
helpers_test.go:352: "test-local-path" [11e7a25d-37cb-42aa-9017-22e556c6536b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [11e7a25d-37cb-42aa-9017-22e556c6536b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [11e7a25d-37cb-42aa-9017-22e556c6536b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006534725s
addons_test.go:967: (dbg) Run:  kubectl --context addons-733032 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 ssh "cat /opt/local-path-provisioner/pvc-e157da8a-0c66-403e-af82-e54d9e7479b6_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-733032 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-733032 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.967295507s)
--- PASS: TestAddons/parallel/LocalPath (64.07s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-p9nln" [ff4e0d11-331a-4e3c-a267-f770869da39b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004930283s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I0908 11:05:46.050904  364318 kapi.go:150] Service nginx in namespace default found.
helpers_test.go:352: "yakd-dashboard-5ff678cb9-j9xb6" [80cb9e5f-37d6-40e8-aa9a-62b435856201] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.012438546s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-733032 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-733032 addons disable yakd --alsologtostderr -v=1: (6.021533999s)
--- PASS: TestAddons/parallel/Yakd (12.04s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.65s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-733032
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-733032: (12.336605824s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-733032
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-733032
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-733032
--- PASS: TestAddons/StoppedEnableDisable (12.65s)

                                                
                                    
x
+
TestCertOptions (66.97s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-975042 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0908 12:04:04.602103  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-975042 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m5.382250429s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-975042 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-975042 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-975042 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-975042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-975042
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-975042: (1.055967334s)
--- PASS: TestCertOptions (66.97s)
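
Note: TestCertOptions inspects the generated apiserver certificate with openssl x509 -text -noout and checks that the extra --apiserver-ips / --apiserver-names values ended up as SANs. The same check can be made with Go's crypto/x509; the sketch below assumes the certificate path used by the test and is meant to run on the node (e.g. via minikube ssh, or after copying the file out).

// certsans.go: parse the apiserver certificate and report its SANs,
// mirroring the openssl check above (illustrative).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)

	// 192.168.15.15 is one of the --apiserver-ips values passed above.
	want := net.ParseIP("192.168.15.15")
	found := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			found = true
			break
		}
	}
	fmt.Println("contains 192.168.15.15:", found)
}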

                                                
                                    
x
+
TestCertExpiration (323.18s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-282968 --memory=3072 --cert-expiration=3m --driver=kvm2 
E0908 12:03:36.588986  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.595544  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.607124  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.628578  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.670105  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.751902  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:36.914177  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:37.236226  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:37.878321  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:39.160622  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:41.722449  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:03:46.844528  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-282968 --memory=3072 --cert-expiration=3m --driver=kvm2 : (1m7.06086138s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-282968 --memory=3072 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-282968 --memory=3072 --cert-expiration=8760h --driver=kvm2 : (1m14.876206857s)
helpers_test.go:175: Cleaning up "cert-expiration-282968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-282968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-282968: (1.242843225s)
--- PASS: TestCertExpiration (323.18s)
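Note: the sequence above exercises certificate rotation: the cluster is started with certificates that expire after 3 minutes, the test waits out that window, and a second start with --cert-expiration=8760h must regenerate the expired certificates. Below is a minimal sketch of the same flow driven from Go with os/exec, assuming the out/minikube-linux-amd64 binary and the profile name shown in the log; it is illustrative only, not the actual cert_options_test.go code.

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

// run executes a minikube command and streams its output.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v failed: %v", args, err)
	}
}

func main() {
	profile := "cert-expiration-282968" // profile name taken from the log above

	// 1. Start with certificates that expire almost immediately.
	run("start", "-p", profile, "--memory=3072", "--cert-expiration=3m", "--driver=kvm2")

	// 2. Let the certificates lapse (the integration test waits roughly this long).
	time.Sleep(3 * time.Minute)

	// 3. Restart with a one-year expiration; minikube must rotate the expired certs.
	run("start", "-p", profile, "--memory=3072", "--cert-expiration=8760h", "--driver=kvm2")

	// 4. Clean up the profile.
	run("delete", "-p", profile)
}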

                                                
                                    
TestDockerFlags (65.09s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-970384 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-970384 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m3.747881717s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-970384 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-970384 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-970384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-970384
--- PASS: TestDockerFlags (65.09s)
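Note: TestDockerFlags passes --docker-env and --docker-opt at start time and then verifies over SSH that the values reach the Docker daemon's systemd unit (Environment= and ExecStart=). A hedged sketch of that verification step, assuming the docker-flags-970384 cluster from the log is still running; this is not the repository's docker_test.go code.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "docker-flags-970384" // profile name from the log above

	// Ask systemd inside the VM for the environment the Docker daemon was started with.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}

	// The --docker-env values from `minikube start` should appear here.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("expected %q in docker Environment, got:\n%s", want, out)
		}
	}
	fmt.Println("docker-env flags propagated to the Docker unit")
}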

                                                
                                    
TestForceSystemdFlag (54.53s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-800610 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-800610 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (52.195781331s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-800610 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-800610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-800610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-800610: (2.04525513s)
--- PASS: TestForceSystemdFlag (54.53s)
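Note: with --force-systemd, Docker inside the VM should report the systemd cgroup driver, which the test checks via docker info --format {{.CgroupDriver}} over SSH. A minimal sketch of the same check, with the profile name taken from the log above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Query the cgroup driver Docker is actually using inside the VM.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-800610",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		log.Fatalf("expected cgroup driver systemd, got %q", driver)
	}
	fmt.Println("cgroup driver is systemd")
}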

                                                
                                    
TestForceSystemdEnv (86.25s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-412757 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-412757 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m24.965328789s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-412757 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-412757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-412757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-412757: (1.019996432s)
--- PASS: TestForceSystemdEnv (86.25s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.3s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0908 11:58:52.820958  364318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:58:52.821096  364318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0908 11:58:52.853319  364318 install.go:62] docker-machine-driver-kvm2: exit status 1
W0908 11:58:52.853515  364318 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:58:52.853584  364318 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2418602004/001/docker-machine-driver-kvm2
I0908 11:58:53.116466  364318 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2418602004/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000678e10 gz:0xc000678e18 tar:0xc000678670 tar.bz2:0xc000678690 tar.gz:0xc0006786e0 tar.xz:0xc000678710 tar.zst:0xc000678df0 tbz2:0xc000678690 tgz:0xc0006786e0 txz:0xc000678710 tzst:0xc000678df0 xz:0xc000678e30 zip:0xc000678e40 zst:0xc000678e38] Getters:map[file:0xc0013497a0 http:0xc0006779f0 https:0xc000677a40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:58:53.116516  364318 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2418602004/001/docker-machine-driver-kvm2
I0908 11:58:54.477585  364318 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:58:54.477680  364318 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0908 11:58:54.512216  364318 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0908 11:58:54.512255  364318 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0908 11:58:54.512316  364318 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0908 11:58:54.512347  364318 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2418602004/002/docker-machine-driver-kvm2
I0908 11:58:54.568759  364318 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2418602004/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc000678e10 gz:0xc000678e18 tar:0xc000678670 tar.bz2:0xc000678690 tar.gz:0xc0006786e0 tar.xz:0xc000678710 tar.zst:0xc000678df0 tbz2:0xc000678690 tgz:0xc0006786e0 txz:0xc000678710 tzst:0xc000678df0 xz:0xc000678e30 zip:0xc000678e40 zst:0xc000678e38] Getters:map[file:0xc001a4dbc0 http:0xc000136410 https:0xc000136460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0908 11:58:54.568826  364318 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2418602004/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (2.30s)
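Note: the log above shows the driver updater first requesting the arch-suffixed release asset (docker-machine-driver-kvm2-amd64), failing because its checksum file returns 404, and then falling back to the unsuffixed common name. A rough sketch of that fallback pattern follows; the download helper is a hypothetical stand-in for minikube's checksummed downloader and simply falls back on any failure rather than only on checksum errors.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// download fetches url into dst; a hypothetical stand-in for minikube's downloader.
func download(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "/tmp/docker-machine-driver-kvm2"

	// Prefer the arch-specific asset; fall back to the common name on failure,
	// mirroring the "trying to get the common version" message in the log.
	if err := download(base+"-amd64", dst); err != nil {
		fmt.Println("arch-specific download failed:", err, "- trying common version")
		if err := download(base, dst); err != nil {
			fmt.Println("download failed:", err)
			os.Exit(1)
		}
	}
	fmt.Println("driver downloaded to", dst)
}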

                                                
                                    
TestErrorSpam/setup (50.45s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-329366 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-329366 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-329366 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-329366 --driver=kvm2 : (50.451415901s)
--- PASS: TestErrorSpam/setup (50.45s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
TestErrorSpam/unpause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (15.64s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop: (12.405344048s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop: (1.189515997s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-329366 --log_dir /tmp/nospam-329366 stop: (2.045361508s)
--- PASS: TestErrorSpam/stop (15.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21512-360138/.minikube/files/etc/test/nested/copy/364318/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (89.61s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 
E0908 11:09:04.607028  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.613513  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.624972  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.646489  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.688024  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.769577  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:04.931146  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:05.252908  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:05.895138  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:07.176882  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:09.739926  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-799296 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m29.613960923s)
--- PASS: TestFunctional/serial/StartWithProxy (89.61s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (56.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0908 11:09:14.516968  364318 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --alsologtostderr -v=8
E0908 11:09:14.862011  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:25.104110  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:09:45.585887  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-799296 --alsologtostderr -v=8: (56.459962168s)
functional_test.go:678: soft start took 56.460772882s for "functional-799296" cluster.
I0908 11:10:10.977329  364318 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (56.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-799296 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-799296 /tmp/TestFunctionalserialCacheCmdcacheadd_local983571491/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache add minikube-local-cache-test:functional-799296
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache delete minikube-local-cache-test:functional-799296
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-799296
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (231.643839ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
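Note: the cache_reload subtest removes registry.k8s.io/pause:latest from the node, confirms crictl inspecti now fails, runs minikube cache reload, and checks that the image is back. A sketch of that sequence, assuming the functional-799296 profile from the log is up; error handling is intentionally minimal.

package main

import (
	"fmt"
	"os/exec"
)

func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	const p, img = "functional-799296", "registry.k8s.io/pause:latest"

	// Remove the image from the node's container runtime.
	mk("-p", p, "ssh", "sudo docker rmi "+img)

	// inspecti should now fail: the image is gone from the node.
	if _, err := mk("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("unexpected: image still present")
		return
	}

	// Reload everything in minikube's local cache back onto the node.
	mk("-p", p, "cache", "reload")

	// inspecti should succeed again.
	if _, err := mk("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}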

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 kubectl -- --context functional-799296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-799296 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (56.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 11:10:26.547403  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-799296 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.021832434s)
functional_test.go:776: restart took 56.02197077s for "functional-799296" cluster.
I0908 11:11:13.089961  364318 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (56.02s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-799296 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs: (1.072134515s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 logs --file /tmp/TestFunctionalserialLogsFileCmd22152508/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 logs --file /tmp/TestFunctionalserialLogsFileCmd22152508/001/logs.txt: (1.101024906s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

                                                
                                    
TestFunctional/serial/InvalidService (4.22s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-799296 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-799296
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-799296: exit status 115 (304.934025ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.63:30293 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-799296 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)
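Note: InvalidService applies a Service with no running backing pod and expects minikube service to fail with exit status 115 and an SVC_UNREACHABLE message instead of printing a usable URL. A small sketch of asserting on that exit code from Go; the code and message are taken from the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-799296")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		// 115 is the SVC_UNREACHABLE exit code seen in the test log.
		fmt.Printf("got expected failure (exit 115):\n%s", out)
		return
	}
	fmt.Printf("expected exit status 115 for a service with no running pods, got err=%v\n", err)
}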

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 config get cpus: exit status 14 (63.713224ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 config get cpus: exit status 14 (66.187529ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
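Note: the config subtest shows that config get on an unset key exits with status 14 ("specified key could not be found in config"), while set, get, and unset round-trip a value. A brief sketch of the same round trip, assuming the binary path and profile from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// config runs `minikube -p functional-799296 config <args>` and returns its trimmed output.
func config(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-799296", "config"}, args...)...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Getting an unset key should fail with exit status 14.
	if _, err := config("get", "cpus"); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("unset key -> exit status", ee.ExitCode())
		}
	}

	// Set a value, read it back, then unset it again.
	config("set", "cpus", "2")
	val, _ := config("get", "cpus")
	fmt.Println("cpus =", val)
	config("unset", "cpus")
}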

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (152.598347ms)

                                                
                                                
-- stdout --
	* [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:11:34.617339  372601 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:11:34.617597  372601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.617613  372601 out.go:374] Setting ErrFile to fd 2...
	I0908 11:11:34.617618  372601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:34.617874  372601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:11:34.618506  372601 out.go:368] Setting JSON to false
	I0908 11:11:34.619770  372601 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3240,"bootTime":1757326655,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:11:34.619841  372601 start.go:140] virtualization: kvm guest
	I0908 11:11:34.622378  372601 out.go:179] * [functional-799296] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0908 11:11:34.624039  372601 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:11:34.624053  372601 notify.go:220] Checking for updates...
	I0908 11:11:34.626607  372601 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:11:34.628184  372601 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:11:34.629361  372601 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:11:34.630607  372601 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:11:34.632221  372601 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:11:34.634293  372601 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:11:34.634890  372601 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.634992  372601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.651748  372601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0908 11:11:34.652379  372601 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.652969  372601 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.652992  372601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.653374  372601 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.653563  372601 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.653812  372601 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:11:34.654216  372601 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:34.654286  372601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:34.670784  372601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0908 11:11:34.671232  372601 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:34.671689  372601 main.go:141] libmachine: Using API Version  1
	I0908 11:11:34.671723  372601 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:34.672125  372601 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:34.672336  372601 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:34.709497  372601 out.go:179] * Using the kvm2 driver based on existing profile
	I0908 11:11:34.710866  372601 start.go:304] selected driver: kvm2
	I0908 11:11:34.710886  372601 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:34.711009  372601 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:11:34.713440  372601 out.go:203] 
	W0908 11:11:34.715140  372601 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 11:11:34.716550  372601 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.30s)
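Note: the dry-run case confirms that a request for 250MB of memory is rejected up front with exit status 23 and an RSRC_INSUFFICIENT_REQ_MEMORY message (the usable minimum is 1800MB per the log), while a dry run without the override succeeds. A sketch of checking that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// 250MB is below minikube's usable minimum, so this start must fail fast even in dry-run mode.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-799296",
		"--dry-run", "--memory", "250MB", "--driver=kvm2")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Printf("rejected as expected (exit 23):\n%s", out)
		return
	}
	fmt.Println("expected exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), got:", err)
}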

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-799296 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (164.314437ms)

                                                
                                                
-- stdout --
	* [functional-799296] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:11:31.771991  372227 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:11:31.772419  372227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:31.772433  372227 out.go:374] Setting ErrFile to fd 2...
	I0908 11:11:31.772440  372227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:11:31.772836  372227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:11:31.773516  372227 out.go:368] Setting JSON to false
	I0908 11:11:31.774828  372227 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3237,"bootTime":1757326655,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0908 11:11:31.774958  372227 start.go:140] virtualization: kvm guest
	I0908 11:11:31.778370  372227 out.go:179] * [functional-799296] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0908 11:11:31.780139  372227 out.go:179]   - MINIKUBE_LOCATION=21512
	I0908 11:11:31.780192  372227 notify.go:220] Checking for updates...
	I0908 11:11:31.783150  372227 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 11:11:31.784737  372227 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	I0908 11:11:31.786401  372227 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	I0908 11:11:31.788019  372227 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0908 11:11:31.789646  372227 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 11:11:31.791593  372227 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:11:31.792128  372227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:31.792210  372227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:31.811491  372227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0908 11:11:31.812258  372227 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:31.812901  372227 main.go:141] libmachine: Using API Version  1
	I0908 11:11:31.812928  372227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:31.813446  372227 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:31.813730  372227 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:31.814088  372227 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 11:11:31.814576  372227 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:11:31.814663  372227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:11:31.832863  372227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0908 11:11:31.833452  372227 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:11:31.833950  372227 main.go:141] libmachine: Using API Version  1
	I0908 11:11:31.833975  372227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:11:31.834298  372227 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:11:31.834512  372227 main.go:141] libmachine: (functional-799296) Calling .DriverName
	I0908 11:11:31.873229  372227 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0908 11:11:31.874772  372227 start.go:304] selected driver: kvm2
	I0908 11:11:31.874796  372227 start.go:918] validating driver "kvm2" against &{Name:functional-799296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21488/minikube-v1.36.0-1756980912-21488-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-799296 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 11:11:31.874927  372227 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 11:11:31.877547  372227 out.go:203] 
	W0908 11:11:31.879237  372227 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 11:11:31.880621  372227 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-799296 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-799296 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-z44vz" [b65092e9-f62e-40d8-b880-c03fbe181f67] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-z44vz" [b65092e9-f62e-40d8-b880-c03fbe181f67] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005311696s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.63:31966
functional_test.go:1680: http://192.168.39.63:31966: success! body:
Request served by hello-node-connect-7d85dfc575-z44vz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.63:31966
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.52s)
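Note: ServiceCmdConnect creates an echo-server deployment, exposes it as a NodePort Service, resolves the endpoint with minikube service --url, and makes an HTTP request against it. A compact sketch of the last two steps, assuming the deployment and Service already exist as in the log:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL for the Service created earlier in the test.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-799296",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the endpoint; the echo-server should answer with the request details.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s failed: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
}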

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh -n functional-799296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cp functional-799296:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2557956061/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh -n functional-799296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh -n functional-799296 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
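Note: a sketch of the same copy round trip with hypothetical file names (host -> VM, VM -> host, and a destination whose parent directories do not yet exist):
echo "copy me" > /tmp/cp-demo.txt
out/minikube-linux-amd64 -p functional-799296 cp /tmp/cp-demo.txt /home/docker/cp-demo.txt
out/minikube-linux-amd64 -p functional-799296 ssh -n functional-799296 "sudo cat /home/docker/cp-demo.txt"
out/minikube-linux-amd64 -p functional-799296 cp functional-799296:/home/docker/cp-demo.txt /tmp/cp-demo-back.txt
out/minikube-linux-amd64 -p functional-799296 cp /tmp/cp-demo.txt /tmp/does/not/exist/cp-demo.txt   # missing parent dirs are created in the VM, as exercised above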

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364318/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/test/nested/copy/364318/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
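Note: the synced file checked above is driven by minikube's file-sync directory; as a sketch (the "demo" paths are hypothetical), anything placed below $MINIKUBE_HOME/files (default ~/.minikube/files) should appear at the same path inside the VM after the next start:
mkdir -p ~/.minikube/files/etc/demo
echo "synced" > ~/.minikube/files/etc/demo/hello
out/minikube-linux-amd64 start -p functional-799296                       # re-running start pushes the files into the VM
out/minikube-linux-amd64 -p functional-799296 ssh "cat /etc/demo/hello"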

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364318.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/ssl/certs/364318.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364318.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /usr/share/ca-certificates/364318.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3643182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/ssl/certs/3643182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3643182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /usr/share/ca-certificates/3643182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
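Note: this test relies on minikube installing CA certificates found under $MINIKUBE_HOME/certs (default ~/.minikube/certs) into the VM's trust store; a sketch with a hypothetical my-ca.pem:
cp my-ca.pem ~/.minikube/certs/
out/minikube-linux-amd64 start -p functional-799296
out/minikube-linux-amd64 -p functional-799296 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
openssl x509 -noout -hash -in my-ca.pem   # yields the hashed name (e.g. 51391683) behind the *.0 symlinks checked above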

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-799296 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
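Note: besides the go-template form used above, the node labels can be inspected with these illustrative alternatives:
kubectl --context functional-799296 get nodes --show-labels
kubectl --context functional-799296 get nodes -o jsonpath='{.items[0].metadata.labels}'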

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh "sudo systemctl is-active crio": exit status 1 (240.155163ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
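Note: systemctl is-active exits 0 only when the unit is active, which is why the inactive crio unit above yields "exit status 3" while the test still passes; on this docker-runtime profile the two runtimes can be compared with:
out/minikube-linux-amd64 -p functional-799296 ssh "sudo systemctl is-active docker"   # expected: active, exit 0
out/minikube-linux-amd64 -p functional-799296 ssh "sudo systemctl is-active crio"     # expected: inactive, non-zero exit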

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdany-port1539350560/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757329879797954831" to /tmp/TestFunctionalparallelMountCmdany-port1539350560/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757329879797954831" to /tmp/TestFunctionalparallelMountCmdany-port1539350560/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757329879797954831" to /tmp/TestFunctionalparallelMountCmdany-port1539350560/001/test-1757329879797954831
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.909388ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0908 11:11:20.044261  364318 retry.go:31] will retry after 498.33802ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 11:11 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 11:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 11:11 test-1757329879797954831
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh cat /mount-9p/test-1757329879797954831
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-799296 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [781a38fa-0001-42cb-b76b-67a54ddfe15b] Pending
helpers_test.go:352: "busybox-mount" [781a38fa-0001-42cb-b76b-67a54ddfe15b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [781a38fa-0001-42cb-b76b-67a54ddfe15b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [781a38fa-0001-42cb-b76b-67a54ddfe15b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00662605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-799296 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdany-port1539350560/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.65s)
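Note: a manual version of the 9p mount exercised above, with a hypothetical host directory; the mount command stays in the foreground, so it is backgrounded here and cleaned up afterwards:
mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-on-host
out/minikube-linux-amd64 mount -p functional-799296 /tmp/demo-mount:/mount-9p &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is present
out/minikube-linux-amd64 -p functional-799296 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-799296 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID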

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-799296 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-799296
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-799296
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-799296 image ls --format short --alsologtostderr:
I0908 11:11:38.939690  372949 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:38.940028  372949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:38.940044  372949 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:38.940050  372949 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:38.940294  372949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:38.940945  372949 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:38.941044  372949 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:38.941407  372949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:38.941480  372949 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:38.958047  372949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
I0908 11:11:38.958617  372949 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:38.959321  372949 main.go:141] libmachine: Using API Version  1
I0908 11:11:38.959350  372949 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:38.959772  372949 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:38.960078  372949 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:38.962378  372949 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:38.962431  372949 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:38.979660  372949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
I0908 11:11:38.980386  372949 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:38.981125  372949 main.go:141] libmachine: Using API Version  1
I0908 11:11:38.981160  372949 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:38.981753  372949 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:38.982013  372949 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:38.982311  372949 ssh_runner.go:195] Run: systemctl --version
I0908 11:11:38.982343  372949 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:38.986216  372949 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:38.986686  372949 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:38.986721  372949 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:38.986956  372949 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:38.987193  372949 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:38.987394  372949 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:38.987606  372949 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:39.079242  372949 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0908 11:11:39.107693  372949 main.go:141] libmachine: Making call to close driver server
I0908 11:11:39.107711  372949 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:39.108030  372949 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:39.108048  372949 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:39.108056  372949 main.go:141] libmachine: Making call to close driver server
I0908 11:11:39.108062  372949 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:39.108390  372949 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:39.108415  372949 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:39.108460  372949 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
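Note: the ImageList* tests in this report differ only in output format; for scripting, the JSON form pairs well with jq (the jq usage is an illustrative addition):
out/minikube-linux-amd64 -p functional-799296 image ls --format short
out/minikube-linux-amd64 -p functional-799296 image ls --format table
out/minikube-linux-amd64 -p functional-799296 image ls --format yaml
out/minikube-linux-amd64 -p functional-799296 image ls --format json | jq -r '.[].repoTags[]'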

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-799296 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-799296 │ 4f4d561f58dd5 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-799296 │ eecdfd85d15fc │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-799296 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-799296 image ls --format table --alsologtostderr:
I0908 11:11:43.885959  373115 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:43.886243  373115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:43.886253  373115 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:43.886257  373115 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:43.886541  373115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:43.887321  373115 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:43.887450  373115 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:43.887862  373115 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:43.887926  373115 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:43.905187  373115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
I0908 11:11:43.905782  373115 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:43.906353  373115 main.go:141] libmachine: Using API Version  1
I0908 11:11:43.906378  373115 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:43.906819  373115 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:43.907120  373115 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:43.909279  373115 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:43.909326  373115 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:43.926283  373115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
I0908 11:11:43.926922  373115 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:43.927431  373115 main.go:141] libmachine: Using API Version  1
I0908 11:11:43.927463  373115 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:43.927887  373115 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:43.928088  373115 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:43.928311  373115 ssh_runner.go:195] Run: systemctl --version
I0908 11:11:43.928335  373115 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:43.931319  373115 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:43.931716  373115 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:43.931739  373115 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:43.931998  373115 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:43.932211  373115 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:43.932378  373115 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:43.932533  373115 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:44.015380  373115 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0908 11:11:44.054517  373115 main.go:141] libmachine: Making call to close driver server
I0908 11:11:44.054545  373115 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:44.055008  373115 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:44.055028  373115 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:44.055038  373115 main.go:141] libmachine: Making call to close driver server
I0908 11:11:44.055044  373115 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:44.055007  373115 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:44.055354  373115 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:44.055435  373115 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:44.055475  373115 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-799296 image ls --format json --alsologtostderr:
[{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cd
b5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-799296","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"eecdfd85d15fc587fc55f5b5845b775881e08ab8796230a2515760e1cd778839","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-799296"],"size":"30"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c"
,"repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"4f4d561f58dd52dfe163ba38088de431f82f5df9539a414b45138018c3b21574","repoDigests":[],"repoTags":["localhost/my-image:functional-799296"],"size":"1240000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-799296 image ls --format json --alsologtostderr:
I0908 11:11:43.677955  373091 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:43.678189  373091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:43.678199  373091 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:43.678204  373091 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:43.678431  373091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:43.679104  373091 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:43.679202  373091 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:43.679574  373091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:43.679630  373091 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:43.696056  373091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
I0908 11:11:43.696564  373091 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:43.697250  373091 main.go:141] libmachine: Using API Version  1
I0908 11:11:43.697282  373091 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:43.697676  373091 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:43.697941  373091 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:43.700153  373091 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:43.700214  373091 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:43.716777  373091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
I0908 11:11:43.717278  373091 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:43.717743  373091 main.go:141] libmachine: Using API Version  1
I0908 11:11:43.717771  373091 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:43.718154  373091 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:43.718368  373091 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:43.718596  373091 ssh_runner.go:195] Run: systemctl --version
I0908 11:11:43.718621  373091 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:43.721554  373091 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:43.721942  373091 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:43.721982  373091 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:43.722100  373091 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:43.722274  373091 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:43.722423  373091 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:43.722525  373091 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:43.802136  373091 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0908 11:11:43.825199  373091 main.go:141] libmachine: Making call to close driver server
I0908 11:11:43.825212  373091 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:43.825528  373091 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:43.825593  373091 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:43.825597  373091 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:43.825608  373091 main.go:141] libmachine: Making call to close driver server
I0908 11:11:43.825619  373091 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:43.825903  373091 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:43.825942  373091 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:43.825987  373091 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-799296 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-799296
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: eecdfd85d15fc587fc55f5b5845b775881e08ab8796230a2515760e1cd778839
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-799296
size: "30"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-799296 image ls --format yaml --alsologtostderr:
I0908 11:11:39.164024  372973 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:39.164262  372973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:39.164270  372973 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:39.164274  372973 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:39.164461  372973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:39.165061  372973 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:39.165169  372973 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:39.165539  372973 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:39.165614  372973 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:39.182541  372973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
I0908 11:11:39.183230  372973 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:39.183827  372973 main.go:141] libmachine: Using API Version  1
I0908 11:11:39.183866  372973 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:39.184252  372973 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:39.184470  372973 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:39.186482  372973 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:39.186544  372973 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:39.204454  372973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
I0908 11:11:39.205095  372973 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:39.205692  372973 main.go:141] libmachine: Using API Version  1
I0908 11:11:39.205717  372973 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:39.206171  372973 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:39.206401  372973 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:39.206740  372973 ssh_runner.go:195] Run: systemctl --version
I0908 11:11:39.206783  372973 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:39.210262  372973 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:39.210806  372973 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:39.210850  372973 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:39.211050  372973 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:39.211256  372973 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:39.211413  372973 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:39.211532  372973 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:39.289925  372973 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0908 11:11:39.313359  372973 main.go:141] libmachine: Making call to close driver server
I0908 11:11:39.313390  372973 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:39.313726  372973 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:39.313745  372973 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:39.313757  372973 main.go:141] libmachine: Making call to close driver server
I0908 11:11:39.313765  372973 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:39.314056  372973 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:39.314080  372973 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:39.314081  372973 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh pgrep buildkitd: exit status 1 (205.594549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr: (3.883210051s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-799296 image build -t localhost/my-image:functional-799296 testdata/build --alsologtostderr:
I0908 11:11:39.577621  373027 out.go:360] Setting OutFile to fd 1 ...
I0908 11:11:39.577915  373027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:39.577927  373027 out.go:374] Setting ErrFile to fd 2...
I0908 11:11:39.577931  373027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:11:39.578166  373027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
I0908 11:11:39.578900  373027 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:39.579780  373027 config.go:182] Loaded profile config "functional-799296": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0908 11:11:39.580178  373027 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:39.580222  373027 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:39.597164  373027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
I0908 11:11:39.597660  373027 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:39.598220  373027 main.go:141] libmachine: Using API Version  1
I0908 11:11:39.598240  373027 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:39.598707  373027 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:39.598930  373027 main.go:141] libmachine: (functional-799296) Calling .GetState
I0908 11:11:39.600905  373027 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
I0908 11:11:39.600968  373027 main.go:141] libmachine: Launching plugin server for driver kvm2
I0908 11:11:39.617592  373027 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
I0908 11:11:39.618191  373027 main.go:141] libmachine: () Calling .GetVersion
I0908 11:11:39.618824  373027 main.go:141] libmachine: Using API Version  1
I0908 11:11:39.618856  373027 main.go:141] libmachine: () Calling .SetConfigRaw
I0908 11:11:39.619242  373027 main.go:141] libmachine: () Calling .GetMachineName
I0908 11:11:39.619426  373027 main.go:141] libmachine: (functional-799296) Calling .DriverName
I0908 11:11:39.619670  373027 ssh_runner.go:195] Run: systemctl --version
I0908 11:11:39.619697  373027 main.go:141] libmachine: (functional-799296) Calling .GetSSHHostname
I0908 11:11:39.622813  373027 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:39.623170  373027 main.go:141] libmachine: (functional-799296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:fa:55", ip: ""} in network mk-functional-799296: {Iface:virbr1 ExpiryTime:2025-09-08 12:08:00 +0000 UTC Type:0 Mac:52:54:00:58:fa:55 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:functional-799296 Clientid:01:52:54:00:58:fa:55}
I0908 11:11:39.623199  373027 main.go:141] libmachine: (functional-799296) DBG | domain functional-799296 has defined IP address 192.168.39.63 and MAC address 52:54:00:58:fa:55 in network mk-functional-799296
I0908 11:11:39.623382  373027 main.go:141] libmachine: (functional-799296) Calling .GetSSHPort
I0908 11:11:39.623577  373027 main.go:141] libmachine: (functional-799296) Calling .GetSSHKeyPath
I0908 11:11:39.623730  373027 main.go:141] libmachine: (functional-799296) Calling .GetSSHUsername
I0908 11:11:39.623910  373027 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/functional-799296/id_rsa Username:docker}
I0908 11:11:39.709986  373027 build_images.go:161] Building image from path: /tmp/build.3198890484.tar
I0908 11:11:39.710077  373027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 11:11:39.725235  373027 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3198890484.tar
I0908 11:11:39.730784  373027 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3198890484.tar: stat -c "%s %y" /var/lib/minikube/build/build.3198890484.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3198890484.tar': No such file or directory
I0908 11:11:39.730828  373027 ssh_runner.go:362] scp /tmp/build.3198890484.tar --> /var/lib/minikube/build/build.3198890484.tar (3072 bytes)
I0908 11:11:39.768843  373027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3198890484
I0908 11:11:39.783139  373027 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3198890484 -xf /var/lib/minikube/build/build.3198890484.tar
I0908 11:11:39.795496  373027 docker.go:361] Building image: /var/lib/minikube/build/build.3198890484
I0908 11:11:39.795575  373027 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-799296 /var/lib/minikube/build/build.3198890484
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:4f4d561f58dd52dfe163ba38088de431f82f5df9539a414b45138018c3b21574
#8 writing image sha256:4f4d561f58dd52dfe163ba38088de431f82f5df9539a414b45138018c3b21574 done
#8 naming to localhost/my-image:functional-799296 done
#8 DONE 0.1s
I0908 11:11:43.372890  373027 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-799296 /var/lib/minikube/build/build.3198890484: (3.57727732s)
I0908 11:11:43.372972  373027 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3198890484
I0908 11:11:43.389732  373027 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3198890484.tar
I0908 11:11:43.402350  373027 build_images.go:217] Built localhost/my-image:functional-799296 from /tmp/build.3198890484.tar
I0908 11:11:43.402390  373027 build_images.go:133] succeeded building to: functional-799296
I0908 11:11:43.402395  373027 build_images.go:134] failed building to: 
I0908 11:11:43.402421  373027 main.go:141] libmachine: Making call to close driver server
I0908 11:11:43.402431  373027 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:43.402806  373027 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:43.402829  373027 main.go:141] libmachine: Making call to close connection to plugin binary
I0908 11:11:43.402839  373027 main.go:141] libmachine: Making call to close driver server
I0908 11:11:43.402847  373027 main.go:141] libmachine: (functional-799296) Calling .Close
I0908 11:11:43.403192  373027 main.go:141] libmachine: (functional-799296) DBG | Closing plugin on server side
I0908 11:11:43.403259  373027 main.go:141] libmachine: Successfully made call to close driver server
I0908 11:11:43.403281  373027 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.30s)
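Note: judging from the build steps above, testdata/build is a three-step context (busybox base, RUN true, ADD content.txt); a roughly equivalent build can be recreated by hand with hypothetical paths:
mkdir -p /tmp/build-demo
echo "hello from the build context" > /tmp/build-demo/content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
out/minikube-linux-amd64 -p functional-799296 image build -t localhost/my-image:functional-799296 /tmp/build-demo
out/minikube-linux-amd64 -p functional-799296 image ls | grep my-image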

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.547699039s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-799296
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image load --daemon kicbase/echo-server:functional-799296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image load --daemon kicbase/echo-server:functional-799296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-799296
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image load --daemon kicbase/echo-server:functional-799296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image save kicbase/echo-server:functional-799296 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image rm kicbase/echo-server:functional-799296 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-799296
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 image save --daemon kicbase/echo-server:functional-799296 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-799296
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-799296 docker-env) && out/minikube-linux-amd64 status -p functional-799296"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-799296 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-799296 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-799296 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hpxnn" [a373b740-06a8-4d0f-bc78-7a5fac183219] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-hpxnn" [a373b740-06a8-4d0f-bc78-7a5fac183219] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005918595s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdspecific-port3233740787/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.759146ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0908 11:11:28.694592  364318 retry.go:31] will retry after 342.988325ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdspecific-port3233740787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh "sudo umount -f /mount-9p": exit status 1 (235.570758ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-799296 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdspecific-port3233740787/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "379.764678ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.25664ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T" /mount1: exit status 1 (278.91493ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0908 11:11:30.431411  364318 retry.go:31] will retry after 354.289094ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-799296 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-799296 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283019861/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "309.088937ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.96055ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-799296 service list -o json: (1.304706345s)
functional_test.go:1504: Took "1.304828311s" to run "out/minikube-linux-amd64 -p functional-799296 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.63:32181
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.63:32181
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-799296 version -o=json --components
E0908 11:11:48.468773  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:14:04.602767  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:14:32.310696  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-799296
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-799296
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-799296
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (242.89s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-545908 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-545908 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m38.834743543s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-545908 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-545908 cache add gcr.io/k8s-minikube/gvisor-addon:2: (3.832550279s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-545908 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-545908 addons enable gvisor: (4.298278645s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [f5309b48-660c-44a2-bafc-8f54230c337f] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00544564s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-545908 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [0357f578-eb01-49d2-8b92-1cf455c7dafa] Pending
helpers_test.go:352: "nginx-gvisor" [0357f578-eb01-49d2-8b92-1cf455c7dafa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-gvisor" [0357f578-eb01-49d2-8b92-1cf455c7dafa] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 56.003688513s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-545908
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-545908: (7.359154574s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-545908 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-545908 --memory=3072 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (52.931173208s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:352: "gvisor" [f5309b48-660c-44a2-bafc-8f54230c337f] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005191224s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:352: "nginx-gvisor" [0357f578-eb01-49d2-8b92-1cf455c7dafa] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.005343075s
helpers_test.go:175: Cleaning up "gvisor-545908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-545908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-545908: (1.871421702s)
--- PASS: TestGvisorAddon (242.89s)

TestMultiControlPlane/serial/StartCluster (242s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 
E0908 11:24:04.602723  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:25:27.672566  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2 : (4m1.248317312s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (242.00s)

TestMultiControlPlane/serial/DeployApp (6.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 kubectl -- rollout status deployment/busybox: (4.088743213s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-6576p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-jcslw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-sc5lc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-6576p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-jcslw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-sc5lc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-6576p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-jcslw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-sc5lc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.53s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-6576p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-6576p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-jcslw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-jcslw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-sc5lc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 kubectl -- exec busybox-7b57f96db7-sc5lc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

TestMultiControlPlane/serial/AddWorkerNode (55.51s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node add --alsologtostderr -v 5
E0908 11:26:19.555682  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.562229  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.573756  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.595268  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.636794  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.718406  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:19.880307  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:20.201981  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:20.843921  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:22.125629  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:24.687195  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:26:29.808928  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 node add --alsologtostderr -v 5: (54.588431189s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
E0908 11:26:40.050423  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.51s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-140960 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (14.13s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp testdata/cp-test.txt ha-140960:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3293370621/001/cp-test_ha-140960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960:/home/docker/cp-test.txt ha-140960-m02:/home/docker/cp-test_ha-140960_ha-140960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test_ha-140960_ha-140960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960:/home/docker/cp-test.txt ha-140960-m03:/home/docker/cp-test_ha-140960_ha-140960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test_ha-140960_ha-140960-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960:/home/docker/cp-test.txt ha-140960-m04:/home/docker/cp-test_ha-140960_ha-140960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test_ha-140960_ha-140960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp testdata/cp-test.txt ha-140960-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3293370621/001/cp-test_ha-140960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m02:/home/docker/cp-test.txt ha-140960:/home/docker/cp-test_ha-140960-m02_ha-140960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test_ha-140960-m02_ha-140960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m02:/home/docker/cp-test.txt ha-140960-m03:/home/docker/cp-test_ha-140960-m02_ha-140960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test_ha-140960-m02_ha-140960-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m02:/home/docker/cp-test.txt ha-140960-m04:/home/docker/cp-test_ha-140960-m02_ha-140960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test_ha-140960-m02_ha-140960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp testdata/cp-test.txt ha-140960-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3293370621/001/cp-test_ha-140960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m03:/home/docker/cp-test.txt ha-140960:/home/docker/cp-test_ha-140960-m03_ha-140960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test_ha-140960-m03_ha-140960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m03:/home/docker/cp-test.txt ha-140960-m02:/home/docker/cp-test_ha-140960-m03_ha-140960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test_ha-140960-m03_ha-140960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m03:/home/docker/cp-test.txt ha-140960-m04:/home/docker/cp-test_ha-140960-m03_ha-140960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test_ha-140960-m03_ha-140960-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp testdata/cp-test.txt ha-140960-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3293370621/001/cp-test_ha-140960-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m04:/home/docker/cp-test.txt ha-140960:/home/docker/cp-test_ha-140960-m04_ha-140960.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960 "sudo cat /home/docker/cp-test_ha-140960-m04_ha-140960.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m04:/home/docker/cp-test.txt ha-140960-m02:/home/docker/cp-test_ha-140960-m04_ha-140960-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m02 "sudo cat /home/docker/cp-test_ha-140960-m04_ha-140960-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 cp ha-140960-m04:/home/docker/cp-test.txt ha-140960-m03:/home/docker/cp-test_ha-140960-m04_ha-140960-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 ssh -n ha-140960-m03 "sudo cat /home/docker/cp-test_ha-140960-m04_ha-140960-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.13s)

TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node stop m02 --alsologtostderr -v 5
E0908 11:27:00.532011  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 node stop m02 --alsologtostderr -v 5: (12.331581034s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5: exit status 7 (697.201968ms)

-- stdout --
	ha-140960
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-140960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-140960-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-140960-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0908 11:27:08.116536  380482 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:27:08.116642  380482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:27:08.116648  380482 out.go:374] Setting ErrFile to fd 2...
	I0908 11:27:08.116654  380482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:27:08.116911  380482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:27:08.117115  380482 out.go:368] Setting JSON to false
	I0908 11:27:08.117148  380482 mustload.go:65] Loading cluster: ha-140960
	I0908 11:27:08.117287  380482 notify.go:220] Checking for updates...
	I0908 11:27:08.117497  380482 config.go:182] Loaded profile config "ha-140960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:27:08.117518  380482 status.go:174] checking status of ha-140960 ...
	I0908 11:27:08.117945  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.118001  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.140291  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37691
	I0908 11:27:08.140892  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.141511  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.141544  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.142041  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.142259  380482 main.go:141] libmachine: (ha-140960) Calling .GetState
	I0908 11:27:08.144182  380482 status.go:371] ha-140960 host status = "Running" (err=<nil>)
	I0908 11:27:08.144207  380482 host.go:66] Checking if "ha-140960" exists ...
	I0908 11:27:08.144529  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.144575  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.162281  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0908 11:27:08.162921  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.163522  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.163559  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.163911  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.164155  380482 main.go:141] libmachine: (ha-140960) Calling .GetIP
	I0908 11:27:08.167253  380482 main.go:141] libmachine: (ha-140960) DBG | domain ha-140960 has defined MAC address 52:54:00:5b:a1:76 in network mk-ha-140960
	I0908 11:27:08.167801  380482 main.go:141] libmachine: (ha-140960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:a1:76", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:21:50 +0000 UTC Type:0 Mac:52:54:00:5b:a1:76 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-140960 Clientid:01:52:54:00:5b:a1:76}
	I0908 11:27:08.167846  380482 main.go:141] libmachine: (ha-140960) DBG | domain ha-140960 has defined IP address 192.168.39.184 and MAC address 52:54:00:5b:a1:76 in network mk-ha-140960
	I0908 11:27:08.168040  380482 host.go:66] Checking if "ha-140960" exists ...
	I0908 11:27:08.168467  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.168522  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.185540  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0908 11:27:08.186126  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.186571  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.186605  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.187013  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.187209  380482 main.go:141] libmachine: (ha-140960) Calling .DriverName
	I0908 11:27:08.187427  380482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:27:08.187457  380482 main.go:141] libmachine: (ha-140960) Calling .GetSSHHostname
	I0908 11:27:08.190585  380482 main.go:141] libmachine: (ha-140960) DBG | domain ha-140960 has defined MAC address 52:54:00:5b:a1:76 in network mk-ha-140960
	I0908 11:27:08.191074  380482 main.go:141] libmachine: (ha-140960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:a1:76", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:21:50 +0000 UTC Type:0 Mac:52:54:00:5b:a1:76 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:ha-140960 Clientid:01:52:54:00:5b:a1:76}
	I0908 11:27:08.191114  380482 main.go:141] libmachine: (ha-140960) DBG | domain ha-140960 has defined IP address 192.168.39.184 and MAC address 52:54:00:5b:a1:76 in network mk-ha-140960
	I0908 11:27:08.191321  380482 main.go:141] libmachine: (ha-140960) Calling .GetSSHPort
	I0908 11:27:08.191540  380482 main.go:141] libmachine: (ha-140960) Calling .GetSSHKeyPath
	I0908 11:27:08.191700  380482 main.go:141] libmachine: (ha-140960) Calling .GetSSHUsername
	I0908 11:27:08.191948  380482 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/ha-140960/id_rsa Username:docker}
	I0908 11:27:08.275939  380482 ssh_runner.go:195] Run: systemctl --version
	I0908 11:27:08.283055  380482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:27:08.302198  380482 kubeconfig.go:125] found "ha-140960" server: "https://192.168.39.254:8443"
	I0908 11:27:08.302239  380482 api_server.go:166] Checking apiserver status ...
	I0908 11:27:08.302277  380482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:27:08.324343  380482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2466/cgroup
	W0908 11:27:08.337700  380482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:27:08.337769  380482 ssh_runner.go:195] Run: ls
	I0908 11:27:08.343518  380482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 11:27:08.348405  380482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 11:27:08.348438  380482 status.go:463] ha-140960 apiserver status = Running (err=<nil>)
	I0908 11:27:08.348452  380482 status.go:176] ha-140960 status: &{Name:ha-140960 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:27:08.348494  380482 status.go:174] checking status of ha-140960-m02 ...
	I0908 11:27:08.348817  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.348867  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.364670  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0908 11:27:08.365283  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.365830  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.365855  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.366250  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.366481  380482 main.go:141] libmachine: (ha-140960-m02) Calling .GetState
	I0908 11:27:08.368271  380482 status.go:371] ha-140960-m02 host status = "Stopped" (err=<nil>)
	I0908 11:27:08.368286  380482 status.go:384] host is not running, skipping remaining checks
	I0908 11:27:08.368292  380482 status.go:176] ha-140960-m02 status: &{Name:ha-140960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:27:08.368309  380482 status.go:174] checking status of ha-140960-m03 ...
	I0908 11:27:08.368706  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.368840  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.386352  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0908 11:27:08.386980  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.387507  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.387534  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.387929  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.388144  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetState
	I0908 11:27:08.389918  380482 status.go:371] ha-140960-m03 host status = "Running" (err=<nil>)
	I0908 11:27:08.389940  380482 host.go:66] Checking if "ha-140960-m03" exists ...
	I0908 11:27:08.390381  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.390436  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.406802  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0908 11:27:08.407487  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.408095  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.408132  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.408543  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.408788  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetIP
	I0908 11:27:08.412241  380482 main.go:141] libmachine: (ha-140960-m03) DBG | domain ha-140960-m03 has defined MAC address 52:54:00:64:e1:c6 in network mk-ha-140960
	I0908 11:27:08.412798  380482 main.go:141] libmachine: (ha-140960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:c6", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:24:15 +0000 UTC Type:0 Mac:52:54:00:64:e1:c6 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-140960-m03 Clientid:01:52:54:00:64:e1:c6}
	I0908 11:27:08.412837  380482 main.go:141] libmachine: (ha-140960-m03) DBG | domain ha-140960-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:64:e1:c6 in network mk-ha-140960
	I0908 11:27:08.413127  380482 host.go:66] Checking if "ha-140960-m03" exists ...
	I0908 11:27:08.413485  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.413529  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.430187  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0908 11:27:08.430751  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.431334  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.431370  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.431863  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.432137  380482 main.go:141] libmachine: (ha-140960-m03) Calling .DriverName
	I0908 11:27:08.432360  380482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:27:08.432391  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetSSHHostname
	I0908 11:27:08.436411  380482 main.go:141] libmachine: (ha-140960-m03) DBG | domain ha-140960-m03 has defined MAC address 52:54:00:64:e1:c6 in network mk-ha-140960
	I0908 11:27:08.436968  380482 main.go:141] libmachine: (ha-140960-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:e1:c6", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:24:15 +0000 UTC Type:0 Mac:52:54:00:64:e1:c6 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-140960-m03 Clientid:01:52:54:00:64:e1:c6}
	I0908 11:27:08.436998  380482 main.go:141] libmachine: (ha-140960-m03) DBG | domain ha-140960-m03 has defined IP address 192.168.39.217 and MAC address 52:54:00:64:e1:c6 in network mk-ha-140960
	I0908 11:27:08.437292  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetSSHPort
	I0908 11:27:08.437546  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetSSHKeyPath
	I0908 11:27:08.437759  380482 main.go:141] libmachine: (ha-140960-m03) Calling .GetSSHUsername
	I0908 11:27:08.438079  380482 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/ha-140960-m03/id_rsa Username:docker}
	I0908 11:27:08.524798  380482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:27:08.542950  380482 kubeconfig.go:125] found "ha-140960" server: "https://192.168.39.254:8443"
	I0908 11:27:08.542998  380482 api_server.go:166] Checking apiserver status ...
	I0908 11:27:08.543055  380482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:27:08.563506  380482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2268/cgroup
	W0908 11:27:08.576634  380482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2268/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:27:08.576718  380482 ssh_runner.go:195] Run: ls
	I0908 11:27:08.582756  380482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0908 11:27:08.589530  380482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0908 11:27:08.589577  380482 status.go:463] ha-140960-m03 apiserver status = Running (err=<nil>)
	I0908 11:27:08.589590  380482 status.go:176] ha-140960-m03 status: &{Name:ha-140960-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:27:08.589613  380482 status.go:174] checking status of ha-140960-m04 ...
	I0908 11:27:08.590016  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.590077  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.606395  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0908 11:27:08.606934  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.607504  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.607529  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.608062  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.608304  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetState
	I0908 11:27:08.610120  380482 status.go:371] ha-140960-m04 host status = "Running" (err=<nil>)
	I0908 11:27:08.610141  380482 host.go:66] Checking if "ha-140960-m04" exists ...
	I0908 11:27:08.610549  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.610602  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.626964  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0908 11:27:08.627499  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.628050  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.628080  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.628469  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.628681  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetIP
	I0908 11:27:08.631786  380482 main.go:141] libmachine: (ha-140960-m04) DBG | domain ha-140960-m04 has defined MAC address 52:54:00:b7:4b:a2 in network mk-ha-140960
	I0908 11:27:08.632371  380482 main.go:141] libmachine: (ha-140960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4b:a2", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:26:01 +0000 UTC Type:0 Mac:52:54:00:b7:4b:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-140960-m04 Clientid:01:52:54:00:b7:4b:a2}
	I0908 11:27:08.632393  380482 main.go:141] libmachine: (ha-140960-m04) DBG | domain ha-140960-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:b7:4b:a2 in network mk-ha-140960
	I0908 11:27:08.632681  380482 host.go:66] Checking if "ha-140960-m04" exists ...
	I0908 11:27:08.633106  380482 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:27:08.633157  380482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:27:08.649433  380482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0908 11:27:08.649964  380482 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:27:08.650397  380482 main.go:141] libmachine: Using API Version  1
	I0908 11:27:08.650420  380482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:27:08.650865  380482 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:27:08.651070  380482 main.go:141] libmachine: (ha-140960-m04) Calling .DriverName
	I0908 11:27:08.651256  380482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:27:08.651276  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetSSHHostname
	I0908 11:27:08.653760  380482 main.go:141] libmachine: (ha-140960-m04) DBG | domain ha-140960-m04 has defined MAC address 52:54:00:b7:4b:a2 in network mk-ha-140960
	I0908 11:27:08.654155  380482 main.go:141] libmachine: (ha-140960-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:4b:a2", ip: ""} in network mk-ha-140960: {Iface:virbr1 ExpiryTime:2025-09-08 12:26:01 +0000 UTC Type:0 Mac:52:54:00:b7:4b:a2 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-140960-m04 Clientid:01:52:54:00:b7:4b:a2}
	I0908 11:27:08.654184  380482 main.go:141] libmachine: (ha-140960-m04) DBG | domain ha-140960-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:b7:4b:a2 in network mk-ha-140960
	I0908 11:27:08.654387  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetSSHPort
	I0908 11:27:08.654592  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetSSHKeyPath
	I0908 11:27:08.654841  380482 main.go:141] libmachine: (ha-140960-m04) Calling .GetSSHUsername
	I0908 11:27:08.655023  380482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/ha-140960-m04/id_rsa Username:docker}
	I0908 11:27:08.744420  380482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:27:08.761895  380482 status.go:176] ha-140960-m04 status: &{Name:ha-140960-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (29.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 node start m02 --alsologtostderr -v 5: (28.235376453s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5: (1.117461459s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.122368939s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 stop --alsologtostderr -v 5
E0908 11:27:41.494227  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 stop --alsologtostderr -v 5: (39.253195057s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 start --wait true --alsologtostderr -v 5
E0908 11:29:03.418999  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:29:04.601957  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 start --wait true --alsologtostderr -v 5: (2m7.860722927s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (7.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 node delete m03 --alsologtostderr -v 5: (6.982790313s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.83s)
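Note: the node-readiness check above is a plain kubectl go-template query. A minimal standalone Go sketch of the same check follows (a hedged illustration, not part of ha_test.go; it assumes kubectl is on PATH and the current kubeconfig context points at the cluster under test):

// Sketch only: reproduce the go-template readiness query run by the test above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template idea as the log line above: print the status of every Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("at least one node is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready=True")
}

Each Ready condition prints one True/False token, so any token other than True marks a node that has not reported Ready.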

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 stop --alsologtostderr -v 5: (36.264581133s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5: exit status 7 (113.153075ms)

                                                
                                                
-- stdout --
	ha-140960
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-140960-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-140960-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:31:12.146105  382650 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:31:12.146382  382650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:31:12.146397  382650 out.go:374] Setting ErrFile to fd 2...
	I0908 11:31:12.146401  382650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:31:12.146680  382650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:31:12.146899  382650 out.go:368] Setting JSON to false
	I0908 11:31:12.146936  382650 mustload.go:65] Loading cluster: ha-140960
	I0908 11:31:12.146989  382650 notify.go:220] Checking for updates...
	I0908 11:31:12.147317  382650 config.go:182] Loaded profile config "ha-140960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:31:12.147338  382650 status.go:174] checking status of ha-140960 ...
	I0908 11:31:12.148529  382650 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:31:12.148631  382650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:31:12.164971  382650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0908 11:31:12.165559  382650 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:31:12.166140  382650 main.go:141] libmachine: Using API Version  1
	I0908 11:31:12.166174  382650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:31:12.166763  382650 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:31:12.167008  382650 main.go:141] libmachine: (ha-140960) Calling .GetState
	I0908 11:31:12.168666  382650 status.go:371] ha-140960 host status = "Stopped" (err=<nil>)
	I0908 11:31:12.168682  382650 status.go:384] host is not running, skipping remaining checks
	I0908 11:31:12.168688  382650 status.go:176] ha-140960 status: &{Name:ha-140960 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:31:12.168733  382650 status.go:174] checking status of ha-140960-m02 ...
	I0908 11:31:12.169056  382650 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:31:12.169116  382650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:31:12.184356  382650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
	I0908 11:31:12.184854  382650 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:31:12.185394  382650 main.go:141] libmachine: Using API Version  1
	I0908 11:31:12.185425  382650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:31:12.185728  382650 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:31:12.185896  382650 main.go:141] libmachine: (ha-140960-m02) Calling .GetState
	I0908 11:31:12.187476  382650 status.go:371] ha-140960-m02 host status = "Stopped" (err=<nil>)
	I0908 11:31:12.187495  382650 status.go:384] host is not running, skipping remaining checks
	I0908 11:31:12.187501  382650 status.go:176] ha-140960-m02 status: &{Name:ha-140960-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:31:12.187518  382650 status.go:174] checking status of ha-140960-m04 ...
	I0908 11:31:12.187824  382650 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:31:12.187864  382650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:31:12.203388  382650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0908 11:31:12.203948  382650 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:31:12.204498  382650 main.go:141] libmachine: Using API Version  1
	I0908 11:31:12.204521  382650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:31:12.204825  382650 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:31:12.205009  382650 main.go:141] libmachine: (ha-140960-m04) Calling .GetState
	I0908 11:31:12.206793  382650 status.go:371] ha-140960-m04 host status = "Stopped" (err=<nil>)
	I0908 11:31:12.206815  382650 status.go:384] host is not running, skipping remaining checks
	I0908 11:31:12.206823  382650 status.go:176] ha-140960-m04 status: &{Name:ha-140960-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (134.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 start --wait true --alsologtostderr -v 5 --driver=kvm2 
E0908 11:31:19.555514  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:31:47.261059  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 start --wait true --alsologtostderr -v 5 --driver=kvm2 : (2m13.24031093s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (134.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (97.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 node add --control-plane --alsologtostderr -v 5
E0908 11:34:04.602925  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-140960 node add --control-plane --alsologtostderr -v 5: (1m36.772084484s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-140960 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (97.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (51.33s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-461866 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-461866 --driver=kvm2 : (51.326271111s)
--- PASS: TestImageBuild/serial/Setup (51.33s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.69s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-461866
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-461866: (1.689392701s)
--- PASS: TestImageBuild/serial/NormalBuild (1.69s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (1.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-461866
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-461866: (1.074315234s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.07s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-461866
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-461866
image_test.go:88: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-461866: (1.095022797s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.10s)

                                                
                                    
x
+
TestJSONOutput/start/Command (95.53s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-582872 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 
E0908 11:36:19.559304  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-582872 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 : (1m35.525867672s)
--- PASS: TestJSONOutput/start/Command (95.53s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-582872 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-582872 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (12.45s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-582872 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-582872 --output=json --user=testUser: (12.447688895s)
--- PASS: TestJSONOutput/stop/Command (12.45s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-322803 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-322803 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.652559ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"27939ee7-c5da-490a-b53b-c0482c24cc86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-322803] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1286d11e-0673-437d-8165-540824aa519f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21512"}}
	{"specversion":"1.0","id":"92eedfa3-9565-449b-a047-397b3e9ac72f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ab23df4-26c3-4dbb-ab65-690953c85844","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig"}}
	{"specversion":"1.0","id":"0d14fe12-5d5b-4996-8d83-6dae984d5f12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube"}}
	{"specversion":"1.0","id":"4af0dff0-070d-4808-975d-097563bd4ada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d336079c-0a13-4d63-86ba-a2cfc0eef6c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e6ed2729-355c-4908-add5-9befff788ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-322803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-322803
--- PASS: TestErrorJSONOutput (0.23s)
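The -- stdout -- block above is a stream of CloudEvents-style JSON lines, one event per line, ending in an io.k8s.sigs.minikube.error event. A hedged Go sketch that decodes that error line is shown below; the struct is illustrative rather than minikube's own type, and only the fields visible in the log are mapped:

// Sketch only: decode one minikube --output=json event line like the one logged above.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"` // all data values in the logged event are strings
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}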

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (106.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-360036 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-360036 --driver=kvm2 : (49.615504046s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-371713 --driver=kvm2 
E0908 11:39:04.606206  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-371713 --driver=kvm2 : (53.731600829s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-360036
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-371713
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-371713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-371713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-371713: (1.016702546s)
helpers_test.go:175: Cleaning up "first-360036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-360036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-360036: (1.014782151s)
--- PASS: TestMinikubeProfile (106.59s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (29.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-833309 --memory=3072 --mount-string /tmp/TestMountStartserial2374327508/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-833309 --memory=3072 --mount-string /tmp/TestMountStartserial2374327508/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.364962434s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.37s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-833309 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-833309 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
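The mount verification above runs "findmnt --json /minikube-host" over SSH. A hedged sketch of parsing that output is below; the field layout follows findmnt's usual top-level "filesystems" array, and the sample values are placeholders rather than values taken from this run:

// Sketch only: parse findmnt --json output of the kind the mount checks above inspect.
package main

import (
	"encoding/json"
	"fmt"
)

type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	raw := []byte(`{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1:/tmp/example","fstype":"9p","options":"rw,relatime"}]}`)
	var out findmntOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s is mounted at %s (type %s)\n", fs.Source, fs.Target, fs.FSType)
	}
}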

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (30.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-847449 --memory=3072 --mount-string /tmp/TestMountStartserial2374327508/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-847449 --memory=3072 --mount-string /tmp/TestMountStartserial2374327508/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.18112087s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-833309 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-847449
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-847449: (1.395605353s)
--- PASS: TestMountStart/serial/Stop (1.40s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.61s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-847449
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-847449: (23.606799473s)
--- PASS: TestMountStart/serial/RestartStopped (24.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847449 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (133.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505972 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 
E0908 11:41:19.556036  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:42:07.674241  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:42:42.625348  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505972 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2 : (2m12.664667587s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.15s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-505972 -- rollout status deployment/busybox: (3.739934665s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-9p8bk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-k6wwg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-9p8bk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-k6wwg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-9p8bk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-k6wwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.48s)
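Before exercising DNS from each pod, the deployment check above collects pod IPs with a kubectl jsonpath query. A minimal standalone sketch of that listing step is below (purely illustrative; it assumes kubectl is on PATH and the current context is the multinode cluster):

// Sketch only: list pod IPs with the same jsonpath query used by the test above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ips := strings.Fields(string(out))
	fmt.Printf("found %d pod IPs: %v\n", len(ips), ips)
}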

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-9p8bk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-9p8bk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-k6wwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-505972 -- exec busybox-7b57f96db7-k6wwg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (55s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-505972 -v=5 --alsologtostderr
E0908 11:44:04.603038  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-505972 -v=5 --alsologtostderr: (54.364813873s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.00s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-505972 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp testdata/cp-test.txt multinode-505972:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696036330/001/cp-test_multinode-505972.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972:/home/docker/cp-test.txt multinode-505972-m02:/home/docker/cp-test_multinode-505972_multinode-505972-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test_multinode-505972_multinode-505972-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972:/home/docker/cp-test.txt multinode-505972-m03:/home/docker/cp-test_multinode-505972_multinode-505972-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test_multinode-505972_multinode-505972-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp testdata/cp-test.txt multinode-505972-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696036330/001/cp-test_multinode-505972-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m02:/home/docker/cp-test.txt multinode-505972:/home/docker/cp-test_multinode-505972-m02_multinode-505972.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test_multinode-505972-m02_multinode-505972.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m02:/home/docker/cp-test.txt multinode-505972-m03:/home/docker/cp-test_multinode-505972-m02_multinode-505972-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test_multinode-505972-m02_multinode-505972-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp testdata/cp-test.txt multinode-505972-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3696036330/001/cp-test_multinode-505972-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m03:/home/docker/cp-test.txt multinode-505972:/home/docker/cp-test_multinode-505972-m03_multinode-505972.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972 "sudo cat /home/docker/cp-test_multinode-505972-m03_multinode-505972.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 cp multinode-505972-m03:/home/docker/cp-test.txt multinode-505972-m02:/home/docker/cp-test_multinode-505972-m03_multinode-505972-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 ssh -n multinode-505972-m02 "sudo cat /home/docker/cp-test_multinode-505972-m03_multinode-505972-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.09s)
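
Note: the copy/verify round trip exercised above can be repeated by hand with the same commands the test runs; a minimal sketch, with <profile> as an illustrative placeholder for the multinode profile name:
  $ minikube -p <profile> cp testdata/cp-test.txt <profile>-m02:/home/docker/cp-test.txt    # host -> node copy
  $ minikube -p <profile> ssh -n <profile>-m02 "sudo cat /home/docker/cp-test.txt"          # verify contents on the target node
  $ minikube -p <profile> cp <profile>-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt    # node -> host copy back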

                                                
                                    
TestMultiNode/serial/StopNode (3.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-505972 node stop m03: (2.330158066s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505972 status: exit status 7 (476.602286ms)

                                                
                                                
-- stdout --
	multinode-505972
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-505972-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-505972-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr: exit status 7 (468.716318ms)

                                                
                                                
-- stdout --
	multinode-505972
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-505972-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-505972-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:44:37.814085  391326 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:44:37.814565  391326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:44:37.814581  391326 out.go:374] Setting ErrFile to fd 2...
	I0908 11:44:37.814588  391326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:44:37.815149  391326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:44:37.815497  391326 out.go:368] Setting JSON to false
	I0908 11:44:37.815552  391326 mustload.go:65] Loading cluster: multinode-505972
	I0908 11:44:37.815637  391326 notify.go:220] Checking for updates...
	I0908 11:44:37.816781  391326 config.go:182] Loaded profile config "multinode-505972": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:44:37.816817  391326 status.go:174] checking status of multinode-505972 ...
	I0908 11:44:37.817301  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:37.817347  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:37.837504  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0908 11:44:37.838163  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:37.838867  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:37.838899  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:37.839378  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:37.839617  391326 main.go:141] libmachine: (multinode-505972) Calling .GetState
	I0908 11:44:37.841220  391326 status.go:371] multinode-505972 host status = "Running" (err=<nil>)
	I0908 11:44:37.841246  391326 host.go:66] Checking if "multinode-505972" exists ...
	I0908 11:44:37.841685  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:37.841754  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:37.858821  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0908 11:44:37.859423  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:37.860008  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:37.860048  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:37.860428  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:37.860651  391326 main.go:141] libmachine: (multinode-505972) Calling .GetIP
	I0908 11:44:37.863896  391326 main.go:141] libmachine: (multinode-505972) DBG | domain multinode-505972 has defined MAC address 52:54:00:5a:65:f3 in network mk-multinode-505972
	I0908 11:44:37.864412  391326 main.go:141] libmachine: (multinode-505972) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:65:f3", ip: ""} in network mk-multinode-505972: {Iface:virbr1 ExpiryTime:2025-09-08 12:41:26 +0000 UTC Type:0 Mac:52:54:00:5a:65:f3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-505972 Clientid:01:52:54:00:5a:65:f3}
	I0908 11:44:37.864446  391326 main.go:141] libmachine: (multinode-505972) DBG | domain multinode-505972 has defined IP address 192.168.39.168 and MAC address 52:54:00:5a:65:f3 in network mk-multinode-505972
	I0908 11:44:37.864641  391326 host.go:66] Checking if "multinode-505972" exists ...
	I0908 11:44:37.865175  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:37.865269  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:37.881715  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40615
	I0908 11:44:37.882306  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:37.882832  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:37.882856  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:37.883277  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:37.883518  391326 main.go:141] libmachine: (multinode-505972) Calling .DriverName
	I0908 11:44:37.883750  391326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:44:37.883784  391326 main.go:141] libmachine: (multinode-505972) Calling .GetSSHHostname
	I0908 11:44:37.887519  391326 main.go:141] libmachine: (multinode-505972) DBG | domain multinode-505972 has defined MAC address 52:54:00:5a:65:f3 in network mk-multinode-505972
	I0908 11:44:37.888010  391326 main.go:141] libmachine: (multinode-505972) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:65:f3", ip: ""} in network mk-multinode-505972: {Iface:virbr1 ExpiryTime:2025-09-08 12:41:26 +0000 UTC Type:0 Mac:52:54:00:5a:65:f3 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:multinode-505972 Clientid:01:52:54:00:5a:65:f3}
	I0908 11:44:37.888058  391326 main.go:141] libmachine: (multinode-505972) DBG | domain multinode-505972 has defined IP address 192.168.39.168 and MAC address 52:54:00:5a:65:f3 in network mk-multinode-505972
	I0908 11:44:37.888202  391326 main.go:141] libmachine: (multinode-505972) Calling .GetSSHPort
	I0908 11:44:37.888417  391326 main.go:141] libmachine: (multinode-505972) Calling .GetSSHKeyPath
	I0908 11:44:37.888581  391326 main.go:141] libmachine: (multinode-505972) Calling .GetSSHUsername
	I0908 11:44:37.888755  391326 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/multinode-505972/id_rsa Username:docker}
	I0908 11:44:37.967911  391326 ssh_runner.go:195] Run: systemctl --version
	I0908 11:44:37.975366  391326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:44:37.992720  391326 kubeconfig.go:125] found "multinode-505972" server: "https://192.168.39.168:8443"
	I0908 11:44:37.992763  391326 api_server.go:166] Checking apiserver status ...
	I0908 11:44:37.992814  391326 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 11:44:38.014781  391326 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2432/cgroup
	W0908 11:44:38.027679  391326 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2432/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0908 11:44:38.027752  391326 ssh_runner.go:195] Run: ls
	I0908 11:44:38.034143  391326 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0908 11:44:38.039573  391326 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0908 11:44:38.039609  391326 status.go:463] multinode-505972 apiserver status = Running (err=<nil>)
	I0908 11:44:38.039622  391326 status.go:176] multinode-505972 status: &{Name:multinode-505972 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:44:38.039639  391326 status.go:174] checking status of multinode-505972-m02 ...
	I0908 11:44:38.040277  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:38.040368  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:38.057986  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I0908 11:44:38.058606  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:38.059179  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:38.059204  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:38.059588  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:38.059822  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetState
	I0908 11:44:38.061633  391326 status.go:371] multinode-505972-m02 host status = "Running" (err=<nil>)
	I0908 11:44:38.061659  391326 host.go:66] Checking if "multinode-505972-m02" exists ...
	I0908 11:44:38.062223  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:38.062289  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:38.079916  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0908 11:44:38.080509  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:38.081150  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:38.081172  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:38.081504  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:38.081701  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetIP
	I0908 11:44:38.085014  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | domain multinode-505972-m02 has defined MAC address 52:54:00:77:56:dc in network mk-multinode-505972
	I0908 11:44:38.085531  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:dc", ip: ""} in network mk-multinode-505972: {Iface:virbr1 ExpiryTime:2025-09-08 12:42:42 +0000 UTC Type:0 Mac:52:54:00:77:56:dc Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-505972-m02 Clientid:01:52:54:00:77:56:dc}
	I0908 11:44:38.085572  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | domain multinode-505972-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:77:56:dc in network mk-multinode-505972
	I0908 11:44:38.085795  391326 host.go:66] Checking if "multinode-505972-m02" exists ...
	I0908 11:44:38.086267  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:38.086340  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:38.102880  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0908 11:44:38.103359  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:38.103881  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:38.103924  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:38.104371  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:38.104631  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .DriverName
	I0908 11:44:38.104905  391326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 11:44:38.104930  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetSSHHostname
	I0908 11:44:38.108041  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | domain multinode-505972-m02 has defined MAC address 52:54:00:77:56:dc in network mk-multinode-505972
	I0908 11:44:38.108501  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:56:dc", ip: ""} in network mk-multinode-505972: {Iface:virbr1 ExpiryTime:2025-09-08 12:42:42 +0000 UTC Type:0 Mac:52:54:00:77:56:dc Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-505972-m02 Clientid:01:52:54:00:77:56:dc}
	I0908 11:44:38.108532  391326 main.go:141] libmachine: (multinode-505972-m02) DBG | domain multinode-505972-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:77:56:dc in network mk-multinode-505972
	I0908 11:44:38.108719  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetSSHPort
	I0908 11:44:38.108964  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetSSHKeyPath
	I0908 11:44:38.109126  391326 main.go:141] libmachine: (multinode-505972-m02) Calling .GetSSHUsername
	I0908 11:44:38.109274  391326 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21512-360138/.minikube/machines/multinode-505972-m02/id_rsa Username:docker}
	I0908 11:44:38.190913  391326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 11:44:38.208815  391326 status.go:176] multinode-505972-m02 status: &{Name:multinode-505972-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:44:38.208866  391326 status.go:174] checking status of multinode-505972-m03 ...
	I0908 11:44:38.209239  391326 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:44:38.209294  391326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:44:38.226026  391326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I0908 11:44:38.226577  391326 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:44:38.227213  391326 main.go:141] libmachine: Using API Version  1
	I0908 11:44:38.227250  391326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:44:38.227697  391326 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:44:38.227960  391326 main.go:141] libmachine: (multinode-505972-m03) Calling .GetState
	I0908 11:44:38.229566  391326 status.go:371] multinode-505972-m03 host status = "Stopped" (err=<nil>)
	I0908 11:44:38.229587  391326 status.go:384] host is not running, skipping remaining checks
	I0908 11:44:38.229594  391326 status.go:176] multinode-505972-m03 status: &{Name:multinode-505972-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.28s)
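
Note: a minimal sketch of the stop-one-node check above (profile name illustrative); as the output shows, `status` deliberately exits with status 7 while any node is stopped:
  $ minikube -p <profile> node stop m03     # stop only the m03 worker
  $ minikube -p <profile> status            # expect exit status 7 and m03 reported as Stopped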

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.74s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-505972 node start m03 -v=5 --alsologtostderr: (39.049390123s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (178.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505972
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-505972
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-505972: (26.843437169s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505972 --wait=true -v=5 --alsologtostderr
E0908 11:46:19.556020  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505972 --wait=true -v=5 --alsologtostderr: (2m31.962510545s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505972
--- PASS: TestMultiNode/serial/RestartKeepsNodes (178.92s)
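
Note: the restart-keeps-nodes flow above, condensed into the same commands the test runs (profile name illustrative):
  $ minikube node list -p <profile>                                  # record the node set
  $ minikube stop -p <profile>                                       # stop the whole cluster
  $ minikube start -p <profile> --wait=true -v=5 --alsologtostderr   # restart and wait for all nodes
  $ minikube node list -p <profile>                                  # expect the same node list as before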

                                                
                                    
TestMultiNode/serial/DeleteNode (2.46s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-505972 node delete m03: (1.886040536s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)
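
Note: equivalent manual steps for the delete-node check above (profile name illustrative):
  $ minikube -p <profile> node delete m03
  $ minikube -p <profile> status --alsologtostderr
  $ kubectl get nodes                        # m03 should no longer be listed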

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-505972 stop: (23.860057224s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505972 status: exit status 7 (98.562307ms)

                                                
                                                
-- stdout --
	multinode-505972
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-505972-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr: exit status 7 (98.963799ms)

                                                
                                                
-- stdout --
	multinode-505972
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-505972-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 11:48:43.363564  393079 out.go:360] Setting OutFile to fd 1 ...
	I0908 11:48:43.363709  393079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:48:43.363718  393079 out.go:374] Setting ErrFile to fd 2...
	I0908 11:48:43.363728  393079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 11:48:43.363938  393079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-360138/.minikube/bin
	I0908 11:48:43.364103  393079 out.go:368] Setting JSON to false
	I0908 11:48:43.364132  393079 mustload.go:65] Loading cluster: multinode-505972
	I0908 11:48:43.364260  393079 notify.go:220] Checking for updates...
	I0908 11:48:43.364544  393079 config.go:182] Loaded profile config "multinode-505972": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0908 11:48:43.364567  393079 status.go:174] checking status of multinode-505972 ...
	I0908 11:48:43.365105  393079 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:48:43.365158  393079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:48:43.385694  393079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I0908 11:48:43.386375  393079 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:48:43.387143  393079 main.go:141] libmachine: Using API Version  1
	I0908 11:48:43.387175  393079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:48:43.387562  393079 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:48:43.387856  393079 main.go:141] libmachine: (multinode-505972) Calling .GetState
	I0908 11:48:43.389580  393079 status.go:371] multinode-505972 host status = "Stopped" (err=<nil>)
	I0908 11:48:43.389605  393079 status.go:384] host is not running, skipping remaining checks
	I0908 11:48:43.389621  393079 status.go:176] multinode-505972 status: &{Name:multinode-505972 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 11:48:43.389644  393079 status.go:174] checking status of multinode-505972-m02 ...
	I0908 11:48:43.390023  393079 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21512-360138/.minikube/bin/docker-machine-driver-kvm2
	I0908 11:48:43.390068  393079 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0908 11:48:43.406035  393079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0908 11:48:43.406516  393079 main.go:141] libmachine: () Calling .GetVersion
	I0908 11:48:43.407048  393079 main.go:141] libmachine: Using API Version  1
	I0908 11:48:43.407092  393079 main.go:141] libmachine: () Calling .SetConfigRaw
	I0908 11:48:43.407494  393079 main.go:141] libmachine: () Calling .GetMachineName
	I0908 11:48:43.407699  393079 main.go:141] libmachine: (multinode-505972-m02) Calling .GetState
	I0908 11:48:43.409438  393079 status.go:371] multinode-505972-m02 host status = "Stopped" (err=<nil>)
	I0908 11:48:43.409452  393079 status.go:384] host is not running, skipping remaining checks
	I0908 11:48:43.409457  393079 status.go:176] multinode-505972-m02 status: &{Name:multinode-505972-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (132.76s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505972 --wait=true -v=5 --alsologtostderr --driver=kvm2 
E0908 11:49:04.602425  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505972 --wait=true -v=5 --alsologtostderr --driver=kvm2 : (2m12.171749257s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-505972 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (132.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (53.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-505972
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505972-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-505972-m02 --driver=kvm2 : exit status 14 (72.443394ms)

                                                
                                                
-- stdout --
	* [multinode-505972-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-505972-m02' is duplicated with machine name 'multinode-505972-m02' in profile 'multinode-505972'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-505972-m03 --driver=kvm2 
E0908 11:51:19.559469  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-505972-m03 --driver=kvm2 : (51.859830517s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-505972
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-505972: exit status 80 (241.547061ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-505972 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-505972-m03 already exists in multinode-505972-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-505972-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.08s)
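
Note: the name-conflict scenario above, condensed (profile name illustrative); both failures are the expected outcomes, exit 14 for the duplicated profile name and exit 80 for the node that already exists as a profile:
  $ minikube start -p <profile>-m02 --driver=kvm2    # rejected: clashes with machine <profile>-m02 inside profile <profile>
  $ minikube start -p <profile>-m03 --driver=kvm2    # succeeds: creates a standalone profile
  $ minikube node add -p <profile>                   # rejected: node <profile>-m03 already exists in the <profile>-m03 profile
  $ minikube delete -p <profile>-m03                 # clean up the standalone profile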

                                                
                                    
TestPreload (159.39s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-143712 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-143712 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.32.0: (1m40.759724736s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-143712 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-143712 image pull gcr.io/k8s-minikube/busybox: (2.247270664s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-143712
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-143712: (7.310684077s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-143712 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0908 11:54:04.602940  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-143712 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (47.784876828s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-143712 image list
helpers_test.go:175: Cleaning up "test-preload-143712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-143712
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-143712: (1.076693476s)
--- PASS: TestPreload (159.39s)
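
Note: the preload scenario above reduces to the following commands (profile name illustrative): start without a preload, pull an extra image, then confirm it survives a stop/start cycle:
  $ minikube start -p <profile> --memory=3072 --preload=false --driver=kvm2 --kubernetes-version=v1.32.0
  $ minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
  $ minikube stop -p <profile>
  $ minikube start -p <profile> --memory=3072 --driver=kvm2
  $ minikube -p <profile> image list         # busybox should still be listed
  $ minikube delete -p <profile>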

                                                
                                    
TestScheduledStopUnix (121.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-089750 --memory=3072 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-089750 --memory=3072 --driver=kvm2 : (50.114207525s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-089750 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-089750 -n scheduled-stop-089750
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-089750 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 11:55:20.769679  364318 retry.go:31] will retry after 108.301µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.770857  364318 retry.go:31] will retry after 134.438µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.771970  364318 retry.go:31] will retry after 132.721µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.773111  364318 retry.go:31] will retry after 451.34µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.774254  364318 retry.go:31] will retry after 373.67µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.775386  364318 retry.go:31] will retry after 716.592µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.776518  364318 retry.go:31] will retry after 834.052µs: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.777676  364318 retry.go:31] will retry after 2.382732ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.781020  364318 retry.go:31] will retry after 2.74588ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.784403  364318 retry.go:31] will retry after 3.075417ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.787585  364318 retry.go:31] will retry after 2.937581ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.790848  364318 retry.go:31] will retry after 6.312769ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.798156  364318 retry.go:31] will retry after 15.979524ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.814438  364318 retry.go:31] will retry after 28.635746ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
I0908 11:55:20.843722  364318 retry.go:31] will retry after 35.026857ms: open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/scheduled-stop-089750/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-089750 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-089750 -n scheduled-stop-089750
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-089750
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-089750 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0908 11:56:19.560309  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-089750
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-089750: exit status 7 (78.783003ms)

                                                
                                                
-- stdout --
	scheduled-stop-089750
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-089750 -n scheduled-stop-089750
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-089750 -n scheduled-stop-089750: exit status 7 (78.694098ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-089750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-089750
--- PASS: TestScheduledStopUnix (121.94s)
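
Note: the scheduled-stop flow above in command form (profile name illustrative): schedule a stop, cancel it, then schedule a short one and confirm the host ends up Stopped:
  $ minikube stop -p <profile> --schedule 5m            # arm a stop five minutes out
  $ minikube stop -p <profile> --cancel-scheduled       # cancel it again
  $ minikube stop -p <profile> --schedule 15s           # arm a short stop and let it fire
  $ minikube status --format={{.Host}} -p <profile>     # "Stopped"; exit status 7 is expected here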

                                                
                                    
TestSkaffold (136.61s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4142163362 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-756868 --memory=3072 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-756868 --memory=3072 --driver=kvm2 : (53.147508604s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4142163362 run --minikube-profile skaffold-756868 --kube-context skaffold-756868 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4142163362 run --minikube-profile skaffold-756868 --kube-context skaffold-756868 --status-check=true --port-forward=false --interactive=false: (1m10.307681513s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-96669c9d5-d57l8" [f7672457-f3ec-44d7-8505-98e3c4b86923] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003869775s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-6b9bdb8448-bhx2h" [c6110c13-064f-4583-ab81-9da9c5adfa4e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004143846s
helpers_test.go:175: Cleaning up "skaffold-756868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-756868
E0908 11:58:47.676623  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-756868: (1.198857806s)
--- PASS: TestSkaffold (136.61s)
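
Note: a sketch of the skaffold round trip above; `skaffold` stands for the binary the test downloaded to /tmp, and the profile name is illustrative:
  $ minikube start -p <profile> --memory=3072 --driver=kvm2
  $ skaffold run --minikube-profile <profile> --kube-context <profile> --status-check=true --port-forward=false --interactive=false
  $ kubectl get pods -l app=leeroy-app       # the deployed example pods should reach Running
  $ minikube delete -p <profile>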

                                                
                                    
TestRunningBinaryUpgrade (182.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1352055804 start -p running-upgrade-105315 --memory=3072 --vm-driver=kvm2 
E0908 11:59:04.601968  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 11:59:22.627731  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1352055804 start -p running-upgrade-105315 --memory=3072 --vm-driver=kvm2 : (2m4.687528623s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-105315 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
E0908 12:01:19.555463  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-105315 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (55.992302382s)
helpers_test.go:175: Cleaning up "running-upgrade-105315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-105315
--- PASS: TestRunningBinaryUpgrade (182.17s)

                                                
                                    
TestKubernetesUpgrade (254.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2 : (1m45.274051296s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-687190
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-687190: (3.379839382s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-687190 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-687190 status --format={{.Host}}: exit status 7 (80.142178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2 : (49.603331026s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-687190 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 106 (99.199129ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-687190] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-687190
	    minikube start -p kubernetes-upgrade-687190 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6871902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-687190 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-687190 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2 : (1m35.227929511s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-687190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-687190
E0908 12:03:57.086729  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-687190: (1.053622289s)
--- PASS: TestKubernetesUpgrade (254.78s)
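
Note: the upgrade path above, condensed (profile name illustrative, versions as logged); the in-place downgrade attempt is expected to be refused with exit status 106:
  $ minikube start -p <profile> --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2
  $ minikube stop -p <profile>
  $ minikube start -p <profile> --memory=3072 --kubernetes-version=v1.34.0 --driver=kvm2    # upgrade in place
  $ kubectl --context <profile> version --output=json
  $ minikube start -p <profile> --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2    # refused: K8S_DOWNGRADE_UNSUPPORTED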

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (157.85s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3886614273 start -p stopped-upgrade-282789 --memory=3072 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3886614273 start -p stopped-upgrade-282789 --memory=3072 --vm-driver=kvm2 : (1m8.943195567s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3886614273 -p stopped-upgrade-282789 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3886614273 -p stopped-upgrade-282789 stop: (13.151851943s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-282789 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-282789 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 : (1m15.74996063s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.85s)

                                                
                                    
TestPause/serial/Start (117.6s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211982 --memory=3072 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-211982 --memory=3072 --install-addons=false --wait=all --driver=kvm2 : (1m57.597927378s)
--- PASS: TestPause/serial/Start (117.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 : exit status 14 (81.075631ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-683671] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21512-360138/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-360138/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
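
Note: the guard checked above, in command form: --no-kubernetes and --kubernetes-version are mutually exclusive, and a previously set global version can be cleared as the error message suggests (profile name illustrative):
  $ minikube start -p <profile> --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2    # exit 14 (MK_USAGE)
  $ minikube config unset kubernetes-version                                                  # clear a global default, if one was set
  $ minikube start -p <profile> --no-kubernetes --driver=kvm2                                 # starts the VM without Kubernetes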

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (73.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-683671 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-683671 --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (1m13.610690732s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-683671 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (73.91s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-282789
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-282789: (1.300484871s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (75.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211982 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-211982 --alsologtostderr -v=1 --driver=kvm2 : (1m15.414356569s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (75.44s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (43.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E0908 12:04:17.569127  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (42.933436489s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-683671 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-683671 status -o json: exit status 2 (251.439473ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-683671","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-683671
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.99s)

                                                
                                    
TestNoKubernetes/serial/Start (30.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 
E0908 12:04:58.531130  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-683671 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2 : (30.275092042s)
--- PASS: TestNoKubernetes/serial/Start (30.28s)

                                                
                                    
TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-211982 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-211982 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-211982 --output=json --layout=cluster: exit status 2 (277.142609ms)

                                                
                                                
-- stdout --
	{"Name":"pause-211982","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-211982","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

                                                
                                    
TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-211982 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-211982 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-211982 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-211982 --alsologtostderr -v=5: (1.050590254s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.49s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-683671 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-683671 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.617735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.45s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-683671
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-683671: (1.445804615s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (74.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-683671 --driver=kvm2 
E0908 12:05:35.782422  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:35.789082  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:35.800586  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:35.822045  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:35.863499  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:35.945133  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:36.106773  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:36.429034  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:37.070429  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:38.352111  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:40.913890  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:46.035672  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:05:56.277727  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-683671 --driver=kvm2 : (1m14.914442643s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (74.91s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (121.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0908 12:06:16.759871  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:19.555948  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:06:20.453361  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m1.98387963s)
--- PASS: TestNetworkPlugins/group/auto/Start (121.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-683671 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-683671 "sudo systemctl is-active --quiet service kubelet": exit status 1 (243.286527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (105.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m45.664263493s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (105.66s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (141.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0908 12:06:57.721972  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m21.410055565s)
--- PASS: TestNetworkPlugins/group/calico/Start (141.41s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-991311 "pgrep -a kubelet"
I0908 12:08:12.443224  364318 config.go:182] Loaded profile config "auto-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mrn56" [6eebc8a5-1ab7-4dcc-b975-acd7108c2b0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mrn56" [6eebc8a5-1ab7-4dcc-b975-acd7108c2b0e] Running
E0908 12:08:19.644003  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004824179s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rph5q" [1c696b45-6ca9-4d56-aec0-34f1670af12d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004883985s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-991311 "pgrep -a kubelet"
I0908 12:08:30.708514  364318 config.go:182] Loaded profile config "kindnet-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zn6tn" [87deec4e-17b8-40bd-b66a-7f260cf6ef6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 12:08:36.589047  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zn6tn" [87deec4e-17b8-40bd-b66a-7f260cf6ef6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004854605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m14.199103653s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Start (121.71s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m1.710670184s)
--- PASS: TestNetworkPlugins/group/false/Start (121.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (132.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m12.250119427s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (132.25s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-76jwp" [1214aba6-3a64-451d-b796-fe33a7bcc79d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-76jwp" [1214aba6-3a64-451d-b796-fe33a7bcc79d] Running
E0908 12:09:04.295107  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/skaffold-756868/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:09:04.602227  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007025757s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-991311 "pgrep -a kubelet"
I0908 12:09:08.383242  364318 config.go:182] Loaded profile config "calico-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cphrn" [cea335f8-39f6-4770-8d67-b61040586ba7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cphrn" [cea335f8-39f6-4770-8d67-b61040586ba7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013759393s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (107.36s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m47.358480523s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-991311 "pgrep -a kubelet"
I0908 12:09:56.092750  364318 config.go:182] Loaded profile config "custom-flannel-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fx4hc" [387c931c-7ba3-4409-a93c-2b5a4472e52f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fx4hc" [387c931c-7ba3-4409-a93c-2b5a4472e52f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003989818s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (110.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0908 12:10:35.783141  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m50.502384403s)
--- PASS: TestNetworkPlugins/group/bridge/Start (110.50s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-991311 "pgrep -a kubelet"
I0908 12:10:51.810164  364318 config.go:182] Loaded profile config "false-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qgj9z" [0aba198f-74eb-4074-af0a-4b4faae424be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qgj9z" [0aba198f-74eb-4074-af0a-4b4faae424be] Running
E0908 12:11:03.485868  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00344028s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-991311 "pgrep -a kubelet"
I0908 12:11:13.940293  364318 config.go:182] Loaded profile config "enable-default-cni-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zndzq" [d4cfd023-5b28-40b2-8f44-d97c4da849e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 12:11:19.555734  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zndzq" [d4cfd023-5b28-40b2-8f44-d97c4da849e0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00539527s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (103.42s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-991311 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m43.417944758s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (103.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-jpmgv" [c7b4b8ff-4714-4ed4-97ec-cc45f17f91d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004115491s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-991311 "pgrep -a kubelet"
I0908 12:11:31.396361  364318 config.go:182] Loaded profile config "flannel-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vtk4b" [5799f6d2-ff3a-4c71-89d4-2b0d17b6cecf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vtk4b" [5799f6d2-ff3a-4c71-89d4-2b0d17b6cecf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.02405393s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (122.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-690079 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-690079 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (2m2.923915055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (141.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-288722 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-288722 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.0: (2m21.070689208s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (141.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-991311 "pgrep -a kubelet"
I0908 12:12:17.591185  364318 config.go:182] Loaded profile config "bridge-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j8fxj" [06ed67c4-fc5e-41d4-8f4d-a7697eac8ded] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j8fxj" [06ed67c4-fc5e-41d4-8f4d-a7697eac8ded] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005029932s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (107.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-295825 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-295825 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.0: (1m47.38429134s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-991311 "pgrep -a kubelet"
I0908 12:13:06.377518  364318 config.go:182] Loaded profile config "kubenet-991311": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-991311 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hmf8m" [8cdf4df9-33cf-4abb-be48-0c8c0e140d1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hmf8m" [8cdf4df9-33cf-4abb-be48-0c8c0e140d1f] Running
E0908 12:13:12.763174  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:12.769755  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:12.781223  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:12.802770  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:12.844352  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:12.925944  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:13.087561  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:13.409336  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:14.051323  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:15.333220  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:13:17.895467  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.004605029s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-991311 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-991311 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
E0908 12:16:02.335479  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:02.629527  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/functional-799296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-059965 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.0
E0908 12:13:44.930038  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/kindnet-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-059965 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.0: (1m39.052155947s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-690079 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cb38023c-f243-40cd-a17e-d9f92bc54b82] Pending
helpers_test.go:352: "busybox" [cb38023c-f243-40cd-a17e-d9f92bc54b82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cb38023c-f243-40cd-a17e-d9f92bc54b82] Running
E0908 12:13:53.741585  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004675064s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-690079 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)
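
The DeployApp step amounts to creating the busybox pod, waiting for it to become Ready, and reading the file-descriptor limit inside it; roughly, as a sketch using the same manifest and context names shown above:

  kubectl --context old-k8s-version-690079 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-690079 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
  kubectl --context old-k8s-version-690079 exec busybox -- /bin/sh -c "ulimit -n"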

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-690079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-690079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166123829s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-690079 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-690079 --alsologtostderr -v=3
E0908 12:14:02.136622  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.143097  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.154550  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.176037  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.217511  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.299056  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.461357  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:02.783426  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:03.425464  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:04.602836  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:04.707518  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:05.411769  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/kindnet-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:07.269237  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-690079 --alsologtostderr -v=3: (12.634448214s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-690079 -n old-k8s-version-690079
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-690079 -n old-k8s-version-690079: exit status 7 (80.24845ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-690079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)
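
The pattern exercised here is that status --format={{.Host}} exits with code 7 while the node is stopped, yet addon enablement is still accepted; by hand that looks roughly like this sketch:

  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-690079 || echo "exit $? (7 is expected while stopped)"
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-690079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4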

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.57s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-690079 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0
E0908 12:14:12.391036  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:22.632916  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-690079 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.28.0: (46.091893032s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-690079 -n old-k8s-version-690079
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-288722 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [381e2640-8662-41e4-8ab3-5580be81d1b2] Pending
helpers_test.go:352: "busybox" [381e2640-8662-41e4-8ab3-5580be81d1b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [381e2640-8662-41e4-8ab3-5580be81d1b2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00565511s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-288722 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-288722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-288722 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05554151s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-288722 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-295825 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d06d59d-c1c9-4c16-b310-cdc867ac6119] Pending
helpers_test.go:352: "busybox" [8d06d59d-c1c9-4c16-b310-cdc867ac6119] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d06d59d-c1c9-4c16-b310-cdc867ac6119] Running
E0908 12:14:43.115171  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004039867s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-295825 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.49s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-288722 --alsologtostderr -v=3
E0908 12:14:34.703671  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-288722 --alsologtostderr -v=3: (12.488280374s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-295825 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-295825 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.43s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-295825 --alsologtostderr -v=3
E0908 12:14:46.373261  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/kindnet-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-295825 --alsologtostderr -v=3: (12.431651638s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-288722 -n no-preload-288722
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-288722 -n no-preload-288722: exit status 7 (86.441906ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-288722 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (55.16s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-288722 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.0
E0908 12:14:56.341386  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.347851  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.359364  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.380879  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.422383  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.503924  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.665654  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:56.987700  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-288722 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.34.0: (54.760692204s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-288722 -n no-preload-288722
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-295825 -n embed-certs-295825
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-295825 -n embed-certs-295825: exit status 7 (122.176093ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-295825 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (65.69s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-295825 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-295825 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.34.0: (1m5.371962413s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-295825 -n embed-certs-295825
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (65.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmb94" [b75e5fa6-4c90-4800-b9fc-5a60ddd182ee] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0908 12:14:57.629522  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:14:58.911589  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:01.473512  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmb94" [b75e5fa6-4c90-4800-b9fc-5a60ddd182ee] Running
E0908 12:15:06.595109  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004222816s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)
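
The dashboard readiness poll above is equivalent to waiting on the same label selector with kubectl; a sketch in which kubectl wait stands in for the test's own poller:

  kubectl --context old-k8s-version-690079 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m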

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bmb94" [b75e5fa6-4c90-4800-b9fc-5a60ddd182ee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00618677s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-690079 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-690079 image list --format=json
E0908 12:15:16.837159  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
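
The image audit simply lists what the container runtime has cached and flags anything outside the expected Kubernetes/minikube set; the raw JSON can be inspected directly, for example (piping through jq is an assumption, not part of the test):

  out/minikube-linux-amd64 -p old-k8s-version-690079 image list --format=json | jq .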

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-690079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-690079 -n old-k8s-version-690079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-690079 -n old-k8s-version-690079: exit status 2 (342.63888ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-690079 -n old-k8s-version-690079
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-690079 -n old-k8s-version-690079: exit status 2 (363.935276ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-690079 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-690079 --alsologtostderr -v=1: (1.054613989s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-690079 -n old-k8s-version-690079
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-690079 -n old-k8s-version-690079
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.77s)
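
While a profile is paused, the status templates used above report the apiserver as Paused and the kubelet as Stopped, both with exit status 2; a condensed sketch of the same sequence:

  out/minikube-linux-amd64 pause -p old-k8s-version-690079
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-690079   # Paused, exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-690079     # Stopped, exit 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-690079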

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-059965 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75aadd25-3392-4195-8076-264c39b9e8c9] Pending
helpers_test.go:352: "busybox" [75aadd25-3392-4195-8076-264c39b9e8c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75aadd25-3392-4195-8076-264c39b9e8c9] Running
E0908 12:15:24.076829  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:27.678391  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/addons-733032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004935203s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-059965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-059965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-059965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.850402513s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-059965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.53s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.0: (1m2.533217399s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-059965 --alsologtostderr -v=3
E0908 12:15:35.782373  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/gvisor-545908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:37.318737  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/custom-flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-059965 --alsologtostderr -v=3: (12.588825848s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gg6vk" [3a86ae36-e8fb-4a59-81c4-e6c302384c1e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gg6vk" [3a86ae36-e8fb-4a59-81c4-e6c302384c1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003434915s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965: exit status 7 (99.129661ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-059965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-059965 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-059965 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.34.0: (1m3.528998515s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gg6vk" [3a86ae36-e8fb-4a59-81c4-e6c302384c1e] Running
E0908 12:15:52.079952  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.086558  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.098074  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.119706  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.161548  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.243884  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.405583  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:52.727252  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:53.368943  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:15:54.651313  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005079758s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-288722 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-288722 image list --format=json
E0908 12:15:56.625782  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/auto-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.2s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-288722 --alsologtostderr -v=1
E0908 12:15:57.213291  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-288722 -n no-preload-288722
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-288722 -n no-preload-288722: exit status 2 (313.257527ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-288722 -n no-preload-288722
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-288722 -n no-preload-288722: exit status 2 (300.745751ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-288722 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-288722 -n no-preload-288722
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-288722 -n no-preload-288722
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.15s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22z92" [10675a18-2f6e-4aef-b03b-59f7a4534f2f] Running
E0908 12:16:08.294945  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/kindnet-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.148618782s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-22z92" [10675a18-2f6e-4aef-b03b-59f7a4534f2f] Running
E0908 12:16:12.577470  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/false-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005729572s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-295825 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0908 12:16:14.215433  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:14.222061  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:14.233591  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:14.255224  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-295825 image list --format=json
E0908 12:16:14.297626  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:14.379244  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.81s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-295825 --alsologtostderr -v=1
E0908 12:16:14.541387  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:14.863727  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-295825 -n embed-certs-295825
E0908 12:16:15.505917  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-295825 -n embed-certs-295825: exit status 2 (275.920469ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-295825 -n embed-certs-295825
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-295825 -n embed-certs-295825: exit status 2 (279.958708ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-295825 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-295825 -n embed-certs-295825
E0908 12:16:16.788203  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-295825 -n embed-certs-295825
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0908 12:16:34.713342  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:35.418841  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052630475s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
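
The warning above means that with --network-plugin=cni and no CNI manifest applied, workloads stay Pending; whether the configured 10.42.0.0/16 pod CIDR was picked up can be checked with standard kubectl queries, for example (a sketch, not part of the test):

  kubectl --context newest-cni-357629 get nodes -o jsonpath='{.items[*].spec.podCIDR}'   # typically a /24 slice of 10.42.0.0/16
  kubectl --context newest-cni-357629 get pods -A --field-selector=status.phase=Pending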

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.44s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-357629 --alsologtostderr -v=3
E0908 12:16:45.661021  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:16:45.998413  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/calico-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-357629 --alsologtostderr -v=3: (12.442090048s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357629 -n newest-cni-357629
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357629 -n newest-cni-357629: exit status 7 (80.886541ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-357629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.1s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.34.0: (35.80698228s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357629 -n newest-cni-357629
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.10s)
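The second start re-runs the first start's flags against the stopped profile and then confirms the host is back up. A rough Go sketch of that step, shelling out to the same commands shown in the log (flags copied verbatim; illustrative only, not the test code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Restart the previously stopped profile with the original flags.
        start := exec.Command("out/minikube-linux-amd64", "start", "-p", "newest-cni-357629",
            "--memory=3072", "--alsologtostderr",
            "--wait=apiserver,system_pods,default_sa", "--network-plugin=cni",
            "--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
            "--driver=kvm2", "--kubernetes-version=v1.34.0")
        start.Stdout, start.Stderr = os.Stdout, os.Stderr
        if err := start.Run(); err != nil {
            log.Fatalf("second start: %v", err)
        }
        // Verify the host reports a running state afterwards.
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "newest-cni-357629", "-n", "newest-cni-357629").CombinedOutput()
        fmt.Printf("host: %s (err: %v)\n", out, err)
    }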

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zz2p4" [25ae577f-11a8-47bc-b660-0d81e9435c72] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zz2p4" [25ae577f-11a8-47bc-b660-0d81e9435c72] Running
E0908 12:16:55.195787  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/enable-default-cni-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003874443s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.00s)
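The 9m0s wait above is a poll for a pod carrying the k8s-app=kubernetes-dashboard label to reach Running. A minimal sketch of that kind of wait, shelling out to kubectl with the context from this run; the polling interval and jsonpath query are assumptions, not the suite's helper.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(9 * time.Minute)
        for time.Now().Before(deadline) {
            // List the phase of all pods matching the dashboard label.
            out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-059965",
                "-n", "kubernetes-dashboard", "get", "pods",
                "-l", "k8s-app=kubernetes-dashboard",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if strings.Contains(string(out), "Running") {
                fmt.Println("dashboard pod is Running")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for dashboard pod")
    }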

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zz2p4" [25ae577f-11a8-47bc-b660-0d81e9435c72] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005261068s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-059965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-059965 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
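The image check dumps the cluster's images as JSON and reports anything outside the expected registries, which is how the two gcr.io/k8s-minikube images above get flagged. A sketch of that idea follows; the JSON field name ("repoTags") and the simplified registry allowlist are assumptions, not the test's exact logic.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "default-k8s-diff-port-059965",
            "image", "list", "--format=json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var images []map[string]any
        if err := json.Unmarshal(out, &images); err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            tags, _ := img["repoTags"].([]any) // "repoTags" is an assumed field name
            for _, t := range tags {
                name := fmt.Sprint(t)
                // Simplified allowlist: anything not from registry.k8s.io is reported.
                if !strings.HasPrefix(name, "registry.k8s.io/") {
                    fmt.Println("Found non-minikube image:", name)
                }
            }
        }
    }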

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-059965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965: exit status 2 (261.751565ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965: exit status 2 (258.957347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-059965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
E0908 12:17:06.142583  364318 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-360138/.minikube/profiles/flannel-991311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-059965 -n default-k8s-diff-port-059965
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)
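The pause subtest is a round trip: pause the profile, read the component states (where exit status 2 is expected, as noted above), unpause, and read them again. A compact Go sketch of that flow, reusing the exact status format strings from the log; illustrative only.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // state returns one field of `minikube status`; the non-zero exit while
    // paused is deliberately tolerated, matching the "may be ok" note above.
    func state(profile, field string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
        return string(out)
    }

    func main() {
        profile := "default-k8s-diff-port-059965"
        if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver:", state(profile, "APIServer")) // expected: Paused
        fmt.Println("kubelet:  ", state(profile, "Kubelet"))   // expected: Stopped while paused
        if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run(); err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver:", state(profile, "APIServer")) // should report a running apiserver again
    }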

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-357629 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-357629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357629 -n newest-cni-357629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357629 -n newest-cni-357629: exit status 2 (268.044597ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357629 -n newest-cni-357629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357629 -n newest-cni-357629: exit status 2 (268.118515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-357629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357629 -n newest-cni-357629
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357629 -n newest-cni-357629
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

Test skip (34/345)

Order skipped test Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestFunctionalNewestKubernetes 0
188 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
215 TestKicCustomNetwork 0
216 TestKicExistingNetwork 0
217 TestKicCustomSubnet 0
218 TestKicStaticIP 0
250 TestChangeNoneUser 0
253 TestScheduledStopWindows 0
257 TestInsufficientStorage 0
261 TestMissingContainerUpgrade 0
272 TestNetworkPlugins/group/cilium 4.01
282 TestStartStop/group/disable-driver-mounts 0.24

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-991311 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-991311" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-991311

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-991311" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-991311"

                                                
                                                
----------------------- debugLogs end: cilium-991311 [took: 3.844242757s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-991311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-991311
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-836293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-836293
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)