Test Report: Docker_Linux_containerd 22021

714686ca7bbd77e34d847e892f53d4af2ede556f:2025-12-02:42609

Test failures (9/419)

TestFunctional/parallel/DashboardCmd (302.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-031973 --alsologtostderr -v=1] stderr:
I1202 15:17:28.905682  449241 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:28.905795  449241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.905803  449241 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:28.905808  449241 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:28.906038  449241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:28.906781  449241 mustload.go:66] Loading cluster: functional-031973
I1202 15:17:28.907856  449241 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:28.908295  449241 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:28.929445  449241 host.go:66] Checking if "functional-031973" exists ...
I1202 15:17:28.929844  449241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:17:29.002190  449241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.99032476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:17:29.002355  449241 api_server.go:166] Checking apiserver status ...
I1202 15:17:29.002409  449241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 15:17:29.002452  449241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:29.027799  449241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:29.142743  449241 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4930/cgroup
W1202 15:17:29.152907  449241 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4930/cgroup: Process exited with status 1
stdout:

stderr:
I1202 15:17:29.152966  449241 ssh_runner.go:195] Run: ls
I1202 15:17:29.157784  449241 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1202 15:17:29.163055  449241 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1202 15:17:29.163128  449241 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1202 15:17:29.163338  449241 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:29.163371  449241 addons.go:70] Setting dashboard=true in profile "functional-031973"
I1202 15:17:29.163385  449241 addons.go:239] Setting addon dashboard=true in "functional-031973"
I1202 15:17:29.163432  449241 host.go:66] Checking if "functional-031973" exists ...
I1202 15:17:29.163987  449241 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:29.191822  449241 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1202 15:17:29.193382  449241 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1202 15:17:29.194639  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1202 15:17:29.194676  449241 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1202 15:17:29.194752  449241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:29.216777  449241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:29.326344  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1202 15:17:29.326369  449241 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1202 15:17:29.340554  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1202 15:17:29.340577  449241 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1202 15:17:29.355338  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1202 15:17:29.355372  449241 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1202 15:17:29.372713  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1202 15:17:29.372743  449241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1202 15:17:29.388437  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1202 15:17:29.388466  449241 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1202 15:17:29.402524  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1202 15:17:29.402550  449241 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1202 15:17:29.417744  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1202 15:17:29.417763  449241 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1202 15:17:29.433828  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1202 15:17:29.433857  449241 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1202 15:17:29.450543  449241 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:17:29.450604  449241 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1202 15:17:29.468021  449241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:17:29.981705  449241 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-031973 addons enable metrics-server

I1202 15:17:29.983245  449241 addons.go:202] Writing out "functional-031973" config to set dashboard=true...
W1202 15:17:29.983547  449241 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1202 15:17:29.984684  449241 kapi.go:59] client config for functional-031973: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.key", CAFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1202 15:17:29.985278  449241 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1202 15:17:29.985297  449241 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1202 15:17:29.985306  449241 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1202 15:17:29.985313  449241 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1202 15:17:29.985321  449241 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1202 15:17:29.992955  449241 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  1e14a1c1-d880-41a8-b1ba-f5ce8d369fac 767 0 2025-12-02 15:17:29 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-02 15:17:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.153.94,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.153.94],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1202 15:17:29.993115  449241 out.go:285] * Launching proxy ...
* Launching proxy ...
I1202 15:17:29.993172  449241 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-031973 proxy --port 36195]
I1202 15:17:29.993448  449241 dashboard.go:159] Waiting for kubectl to output host:port ...
I1202 15:17:30.043657  449241 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1202 15:17:30.043741  449241 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1202 15:17:30.052528  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d3b8a70-9f40-4114-9e46-124c833da3a0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208280 TLS:<nil>}
I1202 15:17:30.052614  449241 retry.go:31] will retry after 73.432µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.056224  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfe4631f-7513-48eb-b214-e329834df876] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208780 TLS:<nil>}
I1202 15:17:30.056279  449241 retry.go:31] will retry after 132.568µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.059849  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[077691fa-1c01-4734-80e1-f3b6f3d29357] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208b40 TLS:<nil>}
I1202 15:17:30.059899  449241 retry.go:31] will retry after 212.877µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.063817  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2eb0c66-904c-4a9c-ae5b-c09bb35dfac9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042da40 TLS:<nil>}
I1202 15:17:30.063872  449241 retry.go:31] will retry after 373.263µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.067193  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3bf6b005-dcf4-417d-808b-a658d498b633] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1202 15:17:30.067234  449241 retry.go:31] will retry after 515.042µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.070587  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35470e3a-eb6e-4559-abc6-814ee2aa5291] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0005d5ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042db80 TLS:<nil>}
I1202 15:17:30.070645  449241 retry.go:31] will retry after 880.824µs: Temporary Error: unexpected response code: 503
I1202 15:17:30.074024  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fc111e3-5ee6-49e2-b603-14367dd553b9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1202 15:17:30.074076  449241 retry.go:31] will retry after 1.647039ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.078447  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d35d512-638f-4fa5-96a2-8a915564036f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00067fd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042dcc0 TLS:<nil>}
I1202 15:17:30.078491  449241 retry.go:31] will retry after 1.402606ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.082724  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[977a8e7d-6149-458c-b5a0-892dc17aa9ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036a8c0 TLS:<nil>}
I1202 15:17:30.082774  449241 retry.go:31] will retry after 2.494949ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.088700  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24e871f5-4af5-4352-bccf-5e2f8b0855c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00042de00 TLS:<nil>}
I1202 15:17:30.088755  449241 retry.go:31] will retry after 4.036144ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.095228  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a5037d85-70ac-415a-bc7e-61f6312e8b5d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1202 15:17:30.095296  449241 retry.go:31] will retry after 6.549807ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.105642  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b15402ed-a348-40a2-b75c-3ca7946409c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036aa00 TLS:<nil>}
I1202 15:17:30.105727  449241 retry.go:31] will retry after 10.104639ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.118947  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[851da338-8e8d-4d95-9b6b-41eaa2ce13e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150000 TLS:<nil>}
I1202 15:17:30.119024  449241 retry.go:31] will retry after 7.003683ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.129751  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd2d07b1-e22e-4b5e-872a-7f287f015a95] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036ab40 TLS:<nil>}
I1202 15:17:30.129846  449241 retry.go:31] will retry after 16.229382ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.150601  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7edb8c63-233b-4ba4-9623-9cd19e75e229] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1202 15:17:30.150698  449241 retry.go:31] will retry after 40.097973ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.194791  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29f7af1a-9606-4934-ae62-8382e622ecf6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036ac80 TLS:<nil>}
I1202 15:17:30.194897  449241 retry.go:31] will retry after 42.297381ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.240884  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3521db4d-1515-46ba-a649-b17b7df5be48] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b03c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036adc0 TLS:<nil>}
I1202 15:17:30.240944  449241 retry.go:31] will retry after 55.235791ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.300512  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a96b9e96-94d3-4525-af97-06c4b908892f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc00072f780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036b180 TLS:<nil>}
I1202 15:17:30.300598  449241 retry.go:31] will retry after 141.233319ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.446136  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e0cd879-9e1a-4d4b-8ff2-38cb9ef3cffc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150140 TLS:<nil>}
I1202 15:17:30.446216  449241 retry.go:31] will retry after 156.215687ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.605753  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4abf687-17a8-47ee-aa7a-94f46764eee0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0017b0500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1202 15:17:30.605829  449241 retry.go:31] will retry after 190.736858ms: Temporary Error: unexpected response code: 503
I1202 15:17:30.800823  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f28896f-d9b2-463b-af54-6f601de2a15a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:30 GMT]] Body:0xc0008a0e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036b400 TLS:<nil>}
I1202 15:17:30.800894  449241 retry.go:31] will retry after 437.958501ms: Temporary Error: unexpected response code: 503
I1202 15:17:31.242459  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95c47543-9b80-4005-8a92-8f45f94db75c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:31 GMT]] Body:0xc0017b0600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1202 15:17:31.242526  449241 retry.go:31] will retry after 375.150197ms: Temporary Error: unexpected response code: 503
I1202 15:17:31.621145  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddc65722-9648-458a-b8e6-69d41c8fc8d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:31 GMT]] Body:0xc0008a1000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036bb80 TLS:<nil>}
I1202 15:17:31.621227  449241 retry.go:31] will retry after 704.299178ms: Temporary Error: unexpected response code: 503
I1202 15:17:32.329315  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8e96001b-c447-493c-bb60-996aa05ae47e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:32 GMT]] Body:0xc00072f900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1202 15:17:32.329380  449241 retry.go:31] will retry after 1.523645226s: Temporary Error: unexpected response code: 503
I1202 15:17:33.857380  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff4461f9-c04b-4cec-9452-abcd6a0a47ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:33 GMT]] Body:0xc0008a1080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150280 TLS:<nil>}
I1202 15:17:33.857454  449241 retry.go:31] will retry after 1.144679699s: Temporary Error: unexpected response code: 503
I1202 15:17:35.006103  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bf1a333-13cb-451e-a605-b0f273790e1b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:35 GMT]] Body:0xc0017b06c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001503c0 TLS:<nil>}
I1202 15:17:35.006176  449241 retry.go:31] will retry after 1.557833298s: Temporary Error: unexpected response code: 503
I1202 15:17:36.568061  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f6c6e525-0f8d-4731-bbeb-bf447044a290] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:36 GMT]] Body:0xc0008a1100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036bcc0 TLS:<nil>}
I1202 15:17:36.568145  449241 retry.go:31] will retry after 4.329490129s: Temporary Error: unexpected response code: 503
I1202 15:17:40.901530  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02125efd-a6c8-40d2-b419-de0c4e77b110] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:40 GMT]] Body:0xc0008a1180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150500 TLS:<nil>}
I1202 15:17:40.901609  449241 retry.go:31] will retry after 6.789008513s: Temporary Error: unexpected response code: 503
I1202 15:17:47.697496  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40710c54-b3b3-443f-9156-75b8f23f1cbe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:47 GMT]] Body:0xc0008a1240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150780 TLS:<nil>}
I1202 15:17:47.697560  449241 retry.go:31] will retry after 9.579459528s: Temporary Error: unexpected response code: 503
I1202 15:17:57.282494  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[695f5e70-92c9-465a-96c1-8d01602de36e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:17:57 GMT]] Body:0xc000882800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036be00 TLS:<nil>}
I1202 15:17:57.282569  449241 retry.go:31] will retry after 8.412852722s: Temporary Error: unexpected response code: 503
I1202 15:18:05.700298  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8f54c04-4069-4f7f-ae3d-f96cf64a8ff1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:18:05 GMT]] Body:0xc0017b07c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1202 15:18:05.700380  449241 retry.go:31] will retry after 23.415372389s: Temporary Error: unexpected response code: 503
I1202 15:18:29.119631  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9540bc8b-8f77-4b22-8073-25ef8d221927] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:18:29 GMT]] Body:0xc000882880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6000 TLS:<nil>}
I1202 15:18:29.119725  449241 retry.go:31] will retry after 34.9920945s: Temporary Error: unexpected response code: 503
I1202 15:19:04.116163  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6643a1e-18cc-466e-9c45-8a8d0c9ef0da] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:19:04 GMT]] Body:0xc00072fc40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001508c0 TLS:<nil>}
I1202 15:19:04.116247  449241 retry.go:31] will retry after 57.738560163s: Temporary Error: unexpected response code: 503
I1202 15:20:01.859443  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ba3a0010-d9cf-4802-be3a-1f07dccf92e0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:20:01 GMT]] Body:0xc0008820c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6140 TLS:<nil>}
I1202 15:20:01.859534  449241 retry.go:31] will retry after 1m10.032263161s: Temporary Error: unexpected response code: 503
I1202 15:21:11.895658  449241 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a07a97d2-8c9a-4508-9b03-02a0f14781c3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:11 GMT]] Body:0xc000882140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150a00 TLS:<nil>}
I1202 15:21:11.895752  449241 retry.go:31] will retry after 1m24.660979926s: Temporary Error: unexpected response code: 503
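The 503 responses above show minikube's health probe polling the dashboard Service through the local kubectl proxy with increasing backoff until the test's timeout. As an illustrative sketch only (not part of the test output), a standalone Go program performing a similar probe against the same proxy URL could look like the following; the URL, starting delay, and overall deadline are assumptions taken from the log, not minikube's actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Proxy URL as served by `kubectl proxy --port 36195` in the log above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	backoff := 100 * time.Millisecond           // starting delay (assumed)
	deadline := time.Now().Add(5 * time.Minute) // overall budget (assumed)

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard is healthy")
				return
			}
			fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, backoff)
		} else {
			fmt.Printf("request failed: %v, retrying in %v\n", err, backoff)
		}
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff; the deadline bounds the total wait
	}
	fmt.Println("dashboard did not become healthy before the deadline")
}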
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-031973
helpers_test.go:243: (dbg) docker inspect functional-031973:

-- stdout --
	[
	    {
	        "Id": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	        "Created": "2025-12-02T15:15:37.382465049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 437199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:15:37.417630105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3-json.log",
	        "Name": "/functional-031973",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-031973:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-031973",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	                "LowerDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-031973",
	                "Source": "/var/lib/docker/volumes/functional-031973/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-031973",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-031973",
	                "name.minikube.sigs.k8s.io": "functional-031973",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c60273079da1f9d4e348ddcae81f0a2346ec733b5680c77eb71ba260385fd94",
	            "SandboxKey": "/var/run/docker/netns/5c60273079da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-031973": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "072e297832857b662108017b58f1caabb1f529b2dbb839e022eeb4c01cc96da4",
	                    "EndpointID": "60b5bde8cb58337b502aeac0f46839fc2f8c145ed5188498e6f8715b9c69a2f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "92:85:11:0a:bb:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-031973",
	                        "8e6415af0faf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-031973 -n functional-031973
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs -n 25: (1.325334381s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-031973 image rm kicbase/echo-server:functional-031973 --alsologtostderr                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount1                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount2                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount3                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image save --daemon kicbase/echo-server:functional-031973 --alsologtostderr                                   │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ mount          │ -p functional-031973 --kill=true                                                                                                │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2576747610/001/cp-test.txt      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                       │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /tmp/does/not/exist/cp-test.txt                                             │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format short --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format yaml --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh pgrep buildkitd                                                                                           │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ image          │ functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr                          │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format json --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format table --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:17:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:17:28.857802  449198 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:28.858147  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858159  449198 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:28.858167  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858525  449198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:17:28.859129  449198 out.go:368] Setting JSON to false
	I1202 15:17:28.860514  449198 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7191,"bootTime":1764681458,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:17:28.860594  449198 start.go:143] virtualization: kvm guest
	I1202 15:17:28.862751  449198 out.go:179] * [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:17:28.864828  449198 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:17:28.864854  449198 notify.go:221] Checking for updates...
	I1202 15:17:28.867583  449198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:17:28.868765  449198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:17:28.870531  449198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:17:28.871999  449198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:17:28.873798  449198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:17:28.875560  449198 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:17:28.876221  449198 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:17:28.903402  449198 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:17:28.903623  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:28.974207  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.961998728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:28.974366  449198 docker.go:319] overlay module found
	I1202 15:17:28.977298  449198 out.go:179] * Using the docker driver based on existing profile
	I1202 15:17:28.978780  449198 start.go:309] selected driver: docker
	I1202 15:17:28.978801  449198 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:28.978924  449198 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:17:28.979041  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:29.050244  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:29.038869576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:29.050908  449198 cni.go:84] Creating CNI manager for ""
	I1202 15:17:29.051007  449198 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:17:29.051074  449198 start.go:353] cluster config:
	{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:29.054245  449198 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ad7eaef8b35d6       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   46052ac8f3adc       busybox-mount                               default
	4ab64fdb4167c       d4918ca78576a       5 minutes ago       Running             nginx                     0                   9638d534f9cc1       nginx-svc                                   default
	ea3aa607d0865       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   38683d7f68a4a       hello-node-75c85bcc94-8dm24                 default
	ae1aa2afadc73       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   7454f7d21eb41       hello-node-connect-7d85dfc575-hncff         default
	c337850ffec5c       01e8bacf0f500       5 minutes ago       Running             kube-controller-manager   2                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	cadd1246401e2       a5f569d49a979       5 minutes ago       Running             kube-apiserver            0                   2adea7d3feb75       kube-apiserver-functional-031973            kube-system
	646c1b88c2291       a3e246e9556e9       5 minutes ago       Running             etcd                      1                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	569b0e142e127       8aa150647e88a       5 minutes ago       Running             kube-proxy                1                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	06c1524f89b8b       01e8bacf0f500       5 minutes ago       Exited              kube-controller-manager   1                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	c8fa8295f09d0       88320b5498ff2       5 minutes ago       Running             kube-scheduler            1                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	b48e38bef4b45       52546a367cc9e       5 minutes ago       Running             coredns                   1                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	19f21b5e8580d       409467f978b4a       5 minutes ago       Running             kindnet-cni               1                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	ef1e68dc2307c       6e38f40d628db       5 minutes ago       Running             storage-provisioner       1                   48df44372fd3e       storage-provisioner                         kube-system
	85c6d8bf722dc       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       0                   48df44372fd3e       storage-provisioner                         kube-system
	84288ecd238d1       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	8db4008180d89       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	c2d18e41fc203       8aa150647e88a       6 minutes ago       Exited              kube-proxy                0                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	3850d6885c4a8       88320b5498ff2       6 minutes ago       Exited              kube-scheduler            0                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	7a94f88d6c9f0       a3e246e9556e9       6 minutes ago       Exited              etcd                      0                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	
	
	==> containerd <==
	Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.548504730Z" level=info msg="container event discarded" container=cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f type=CONTAINER_STARTED_EVENT
	Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.982591006Z" level=info msg="container event discarded" container=304e1d2b3e8d28ea0e5ecd99c9224c619a48785c8225a8b961bb0e38fcf94d5b type=CONTAINER_DELETED_EVENT
	Dec 02 15:21:48 functional-031973 containerd[3792]: time="2025-12-02T15:21:48.982645089Z" level=info msg="container event discarded" container=c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66 type=CONTAINER_CREATED_EVENT
	Dec 02 15:21:49 functional-031973 containerd[3792]: time="2025-12-02T15:21:49.065170181Z" level=info msg="container event discarded" container=c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66 type=CONTAINER_STARTED_EVENT
	Dec 02 15:21:51 functional-031973 containerd[3792]: time="2025-12-02T15:21:51.998743959Z" level=info msg="container event discarded" container=325c273b19ce3626ee1377f7cbb1bb57de4b739c3413425a92dcdd79c186257e type=CONTAINER_STOPPED_EVENT
	Dec 02 15:21:52 functional-031973 containerd[3792]: time="2025-12-02T15:21:52.998040292Z" level=info msg="container event discarded" container=abb4b063ffdd986a74f77852bf703a58889b7d9f6a366dd829048fe7a66fc7a9 type=CONTAINER_DELETED_EVENT
	Dec 02 15:22:10 functional-031973 containerd[3792]: time="2025-12-02T15:22:10.960374755Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:10 functional-031973 containerd[3792]: time="2025-12-02T15:22:10.960479369Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:14 functional-031973 containerd[3792]: time="2025-12-02T15:22:14.041094242Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_STOPPED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.339796327Z" level=info msg="container event discarded" container=7454f7d21eb4173a2fcc9281a2434b271f88a3f5b16b38bc8296fb6747262ac2 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.339875120Z" level=info msg="container event discarded" container=7454f7d21eb4173a2fcc9281a2434b271f88a3f5b16b38bc8296fb6747262ac2 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.507363218Z" level=info msg="container event discarded" container=38683d7f68a4a44a0cf059979a63a7f26ec26d635d0531803f0d1852528aef05 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.507443403Z" level=info msg="container event discarded" container=38683d7f68a4a44a0cf059979a63a7f26ec26d635d0531803f0d1852528aef05 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.625764091Z" level=info msg="container event discarded" container=9638d534f9cc11c7b3553138db86ae152f51bf793fd8c6690fece8feaf32276e type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:16 functional-031973 containerd[3792]: time="2025-12-02T15:22:16.625837881Z" level=info msg="container event discarded" container=9638d534f9cc11c7b3553138db86ae152f51bf793fd8c6690fece8feaf32276e type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.185468137Z" level=info msg="container event discarded" container=ae1aa2afadc7333251d88624ce4c05c6201cf7820e7b6f240b0f8f750b5dd3d4 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.230259490Z" level=info msg="container event discarded" container=ae1aa2afadc7333251d88624ce4c05c6201cf7820e7b6f240b0f8f750b5dd3d4 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.795069032Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.844444763Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.867316613Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.922392722Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448013865Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448071031Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902576613Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902816399Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STARTED_EVENT
	
	
	==> coredns [84288ecd238d1ae9a22d0f967cce2f858ff120a649bf4bb1ed143ac2e88eae81] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46439 - 44210 "HINFO IN 7161019882344419339.5475944101483733167. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0705958s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b48e38bef4b45aff4e9fe11ebb9238a1fa36a1eb7ac89a19b04a3c28f80f0997] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49535 - 17750 "HINFO IN 3721344718356208668.6506403759620066385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026959352s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-031973
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-031973
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-031973
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-031973
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:22:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-031973
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                120a04eb-b735-4dd2-a8e6-bf3b871cface
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8dm24                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  default                     hello-node-connect-7d85dfc575-hncff           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     mysql-5bb876957f-ljrh9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m51s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 coredns-66bc5c9577-b94tb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m31s
	  kube-system                 etcd-functional-031973                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m37s
	  kube-system                 kindnet-z4gbw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m33s
	  kube-system                 kube-apiserver-functional-031973              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-functional-031973     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-zpxn7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-scheduler-functional-031973              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wk9xg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b6pzr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  Starting                 5m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m42s)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s                  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    6m37s                  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s                  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	  Normal  NodeReady                6m20s                  kubelet          Node functional-031973 status is now: NodeReady
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m43s)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m43s)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x7 over 5m43s)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[ +13.571564] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 96 e2 dd 40 21 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[  +2.699615] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[Dec 2 14:52] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 3c f9 c8 55 0b 08 06
	[  +0.118748] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	[  +0.856727] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +14.974602] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c3 c5 ff a1 a9 08 06
	[  +0.000340] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[  +2.666742] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 5e 20 e4 1d 98 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +24.223711] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 09 24 19 b9 42 08 06
	[  +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	
	
	==> etcd [646c1b88c2291cb75cfbfa0d6acbe8c8f6efeb9548850bda8083a0da895f1895] <==
	{"level":"warn","ts":"2025-12-02T15:16:49.368918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.376172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.384407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.406190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.412902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.419629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.426243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.432784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.439580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.448532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.458893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.466153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.473143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.480092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.486754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.494019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.503161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.511195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.518025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.534149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.540735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.558898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.565791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.573609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.627411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	
	
	==> etcd [7a94f88d6c9f001c713f28d38be30c8b80117dc154363dfccf439f82d547fabb] <==
	{"level":"warn","ts":"2025-12-02T15:15:50.393142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.400004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.407216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.429152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.436768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.489896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:16:45.855107Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:16:45.855196Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:16:45.855325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.857011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.858427Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.858500Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:16:45.858534Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858519Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858525Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T15:16:45.858554Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858575Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:16:45.858583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-02T15:16:45.858586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:16:45.860709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860741Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:16:45.860752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 15:22:30 up  2:04,  0 user,  load average: 0.10, 0.52, 0.83
	Linux functional-031973 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19f21b5e8580d8f28a81006ac30e2cb2f04cbd5dcb33e97d6895451934417eeb] <==
	I1202 15:20:26.940889       1 main.go:301] handling current node
	I1202 15:20:36.944735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:20:36.944772       1 main.go:301] handling current node
	I1202 15:20:46.941319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:20:46.941361       1 main.go:301] handling current node
	I1202 15:20:56.941003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:20:56.941052       1 main.go:301] handling current node
	I1202 15:21:06.941269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:06.941309       1 main.go:301] handling current node
	I1202 15:21:16.941852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:16.941891       1 main.go:301] handling current node
	I1202 15:21:26.941322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:26.941380       1 main.go:301] handling current node
	I1202 15:21:36.940708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:36.940755       1 main.go:301] handling current node
	I1202 15:21:46.946787       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:46.946828       1 main.go:301] handling current node
	I1202 15:21:56.941427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:56.941463       1 main.go:301] handling current node
	I1202 15:22:06.949323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:06.949362       1 main.go:301] handling current node
	I1202 15:22:16.944491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:16.944530       1 main.go:301] handling current node
	I1202 15:22:26.940723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:26.940762       1 main.go:301] handling current node
	
	
	==> kindnet [8db4008180d89c313b691c3ffc28ed67067eecede802fad652ac37fd6fd36acd] <==
	I1202 15:15:59.740136       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:15:59.740462       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:15:59.740638       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:15:59.740656       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:15:59.740718       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:15:59.942159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:15:59.942193       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:15:59.942205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:15:59.942356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:16:00.448425       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:16:00.448465       1 metrics.go:72] Registering metrics
	I1202 15:16:00.448562       1 controller.go:711] "Syncing nftables rules"
	I1202 15:16:09.943353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:09.943468       1 main.go:301] handling current node
	I1202 15:16:19.947849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:19.947892       1 main.go:301] handling current node
	I1202 15:16:29.945800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:29.945842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f] <==
	I1202 15:16:50.088066       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 15:16:50.088162       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 15:16:50.088208       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 15:16:50.092638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:16:50.097887       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:16:50.111508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 15:16:50.123577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:16:50.127784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:16:50.926982       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:16:50.991403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 15:16:51.297655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:16:51.303790       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:16:51.778956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:16:51.875708       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:16:51.933585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:16:51.941645       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:16:53.753031       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:17:10.530696       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.2.222"}
	I1202 15:17:15.965948       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.119.76"}
	I1202 15:17:16.154741       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.98.205"}
	I1202 15:17:16.202184       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.87.195"}
	I1202 15:17:29.825060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:17:29.960687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.153.94"}
	I1202 15:17:29.973650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.6.186"}
	I1202 15:17:39.107718       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.191.136"}
	
	
	==> kube-controller-manager [06c1524f89b8b1a6e7711d8cea9dec8e489ce09bfdb7e9eeadd318646ca74233] <==
	I1202 15:16:37.392510       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:16:38.150126       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 15:16:38.150155       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:38.151661       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:16:38.151685       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:16:38.152002       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:16:38.152052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:16:48.154086       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66] <==
	I1202 15:16:53.453783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 15:16:53.453829       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 15:16:53.453856       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 15:16:53.453874       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 15:16:53.453881       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 15:16:53.453920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:16:53.454445       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 15:16:53.455458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.455556       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.458938       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 15:16:53.461334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:53.461352       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 15:16:53.461363       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:16:53.461628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 15:16:53.463921       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:16:53.464062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 15:16:53.468396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 15:16:53.470743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 15:17:29.890175       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.895423       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.901866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.902541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.908349       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.912406       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.913104       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [569b0e142e12766db902223ca7eb146be3849a69f3c33df418b36923d82a585a] <==
	I1202 15:16:36.597068       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1202 15:16:36.598151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:37.841108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:40.338703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:45.857370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:16:58.598080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:16:58.598118       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:16:58.598231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:16:58.621795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:16:58.621848       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:16:58.627324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:16:58.627586       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:16:58.627599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:58.628806       1 config.go:200] "Starting service config controller"
	I1202 15:16:58.628830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:16:58.628871       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:16:58.628889       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:16:58.628923       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:16:58.628876       1 config.go:309] "Starting node config controller"
	I1202 15:16:58.628957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:16:58.628966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:16:58.628972       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:16:58.729789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:16:58.729915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:16:58.729953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c2d18e41fc203eee96d6a09dfee77221ae299daef844af8d7758972f0d5eebd6] <==
	I1202 15:15:59.236263       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:15:59.310604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:15:59.411054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:15:59.411094       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:15:59.411211       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:15:59.458243       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:15:59.458294       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:15:59.464059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:15:59.464551       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:15:59.464588       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:15:59.466130       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:15:59.466142       1 config.go:309] "Starting node config controller"
	I1202 15:15:59.466157       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:15:59.466162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:15:59.466183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:15:59.466219       1 config.go:200] "Starting service config controller"
	I1202 15:15:59.466231       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:15:59.466205       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:15:59.466263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:15:59.566412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:15:59.566431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:15:59.566517       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3850d6885c4a8427a31f9c1e3c8dfc49dde93cc3abd5127ae5b5e17c87485b87] <==
	E1202 15:15:50.912434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:50.912471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:50.912528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:50.912901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:50.912944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:51.890107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:15:51.920213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:15:51.922190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:51.923075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:52.010654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:15:52.071843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:52.119186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:15:52.132528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:52.144768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:15:52.154877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:52.187203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:15:52.196387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:15:52.354774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 15:15:54.408421       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635194       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635272       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:16:35.635369       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:16:35.635381       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1202 15:16:35.635281       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1202 15:16:35.635400       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8fa8295f09d01cc139eda620db6d699a0081f04519fd714f09996c687592e9e] <==
	E1202 15:16:41.871504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:41.883003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:41.978092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:42.223233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:42.331606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:44.153592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:16:44.898779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:45.096198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:16:45.140830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:16:45.280464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:45.364422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:45.426247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:45.761211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:45.954414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:16:46.119488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:46.263246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:16:46.408248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:46.563072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:46.612942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 15:16:47.095235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:47.397805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:47.457860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:16:47.479575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:16:47.828022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:17:00.145159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:21:07 functional-031973 kubelet[4768]: E1202 15:21:07.924849    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:21:17 functional-031973 kubelet[4768]: E1202 15:21:17.925019    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:21:17 functional-031973 kubelet[4768]: E1202 15:21:17.925019    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:21:18 functional-031973 kubelet[4768]: E1202 15:21:18.923334    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:21:22 functional-031973 kubelet[4768]: E1202 15:21:22.924681    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:21:29 functional-031973 kubelet[4768]: E1202 15:21:29.925219    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:21:31 functional-031973 kubelet[4768]: E1202 15:21:31.923994    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:21:31 functional-031973 kubelet[4768]: E1202 15:21:31.925318    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:21:36 functional-031973 kubelet[4768]: E1202 15:21:36.924858    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:21:42 functional-031973 kubelet[4768]: E1202 15:21:42.925125    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:21:44 functional-031973 kubelet[4768]: E1202 15:21:44.925000    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:21:45 functional-031973 kubelet[4768]: E1202 15:21:45.924363    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:21:51 functional-031973 kubelet[4768]: E1202 15:21:51.924590    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:21:53 functional-031973 kubelet[4768]: E1202 15:21:53.927514    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:21:58 functional-031973 kubelet[4768]: E1202 15:21:58.924761    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:22:00 functional-031973 kubelet[4768]: E1202 15:22:00.923573    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:22:03 functional-031973 kubelet[4768]: E1202 15:22:03.924995    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:22:07 functional-031973 kubelet[4768]: E1202 15:22:07.925376    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:22:12 functional-031973 kubelet[4768]: E1202 15:22:12.924459    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:22:13 functional-031973 kubelet[4768]: E1202 15:22:13.925201    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:22:14 functional-031973 kubelet[4768]: E1202 15:22:14.924417    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:22:19 functional-031973 kubelet[4768]: E1202 15:22:19.925284    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:22:23 functional-031973 kubelet[4768]: E1202 15:22:23.924071    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:22:25 functional-031973 kubelet[4768]: E1202 15:22:25.924453    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:22:27 functional-031973 kubelet[4768]: E1202 15:22:27.925442    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	
	
	==> storage-provisioner [85c6d8bf722dcb136812c6f14c45b5d380b1de637a1b3615b9d1d2b7fb98940c] <==
	W1202 15:16:10.586263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.586468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 15:16:10.586687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	I1202 15:16:10.586920       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06f31c89-8177-4473-aea9-89a84ed0b889", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7 became leader
	W1202 15:16:10.589335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:10.592545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.687378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	W1202 15:16:12.596560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:12.601329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.604965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.614131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.617205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.621029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.624813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.630072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.633409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.637710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.641582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.646626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.650290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.655142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.658929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.664778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.668074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.672710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ef1e68dc2307c5daf3aa5cdb63ca8b1bb338e7f8dfd850d51a666ac3747a2970] <==
	W1202 15:22:05.414158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:07.417238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:07.421879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:09.425884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:09.430109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:11.433511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:11.437616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:13.441118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:13.445624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:15.449074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:15.454476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:17.457437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:17.462411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:19.465786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:19.470388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:21.473986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:21.479323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:23.482687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:23.486733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:25.490154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:25.495164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:27.498617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:27.502865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:29.506067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:22:29.510094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
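The repeated W-level lines above are API deprecation warnings relayed by client-go for every request the storage-provisioner makes against the core/v1 Endpoints resource (most likely its Endpoints-based leader-election lock, given the steady two-second cadence); they are noise rather than a cause of the failure below. A minimal check of the replacement resource, assuming the same kubectl context used throughout this report, could look like:

    # EndpointSlices (discovery.k8s.io/v1) are the suggested replacement for core/v1 Endpoints
    kubectl --context functional-031973 get endpointslices -A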
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
helpers_test.go:269: (dbg) Run:  kubectl --context functional-031973 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1 (85.33647ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:17:32 +0000
	      Finished:     Tue, 02 Dec 2025 15:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6jsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x6jsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m2s   default-scheduler  Successfully assigned default/busybox-mount to functional-031973
	  Normal  Pulling    5m2s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m     kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.085s (2.085s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m59s  kubelet            Created container: mount-munger
	  Normal  Started    4m59s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-ljrh9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:39 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gk92d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gk92d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m52s                default-scheduler  Successfully assigned default/mysql-5bb876957f-ljrh9 to functional-031973
	  Normal   Pulling    98s (x5 over 4m52s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     95s (x5 over 4m49s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   95s (x5 over 4m49s)   kubelet  Error: ErrImagePull
	  Warning  Failed   40s (x15 over 4m48s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  6s (x18 over 4m48s)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slhv5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-slhv5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m9s                  default-scheduler  Successfully assigned default/sp-pod to functional-031973
	  Warning  Failed     3m43s (x4 over 5m7s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  2m15s (x5 over 5m9s)  kubelet  Pulling image "docker.io/nginx"
	  Warning  Failed   2m12s (x5 over 5m7s)  kubelet  Error: ErrImagePull
	  Warning  Failed   2m12s                 kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   73s (x15 over 5m6s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  8s (x20 over 5m6s)   kubelet  Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wk9xg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-b6pzr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)
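The two NotFound errors in the post-mortem stderr above are a namespace artifact rather than proof that the dashboard pods never existed: the non-running pod list was collected with -A across all namespaces, but the follow-up describe ran without a namespace flag and therefore only searched default, while minikube's dashboard addon deploys its pods into the kubernetes-dashboard namespace. A describe scoped to that namespace, assuming the pods still exist when it is run, would look like:

    # the dashboard pods live outside the default namespace
    kubectl --context functional-031973 -n kubernetes-dashboard describe pod \
        dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr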

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (369.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9024b0c9-a595-47ac-b71f-a113ecd27d5b] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003654243s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-031973 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-031973 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-031973 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-031973 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:17:22.023983  406799 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [038e8210-22af-4586-bb52-4b0ff00eb7be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-02 15:23:22.371397005 +0000 UTC m=+839.734290497
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-031973 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-031973 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-031973/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:17:22 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slhv5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-slhv5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-031973
Warning  Failed     4m34s (x4 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling  3m6s (x5 over 6m)     kubelet  Pulling image "docker.io/nginx"
Warning  Failed   3m3s (x5 over 5m58s)  kubelet  Error: ErrImagePull
Warning  Failed   3m3s                  kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff  44s (x21 over 5m57s)  kubelet  Back-off pulling image "docker.io/nginx"
Warning  Failed   44s (x21 over 5m57s)  kubelet  Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-031973 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-031973 logs sp-pod -n default: exit status 1 (71.364417ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-031973 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
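This failure, like the sp-pod and mysql failures in the DashboardCmd post-mortem above, traces back to unauthenticated pulls from registry-1.docker.io being rejected with 429 Too Many Requests. One way such a run could avoid the pull entirely, sketched here on the assumption that the required images are already available to the host's Docker daemon or as tarballs, is to pre-load them into the cluster's containerd store before the pods are created:

    # pre-load the images the test pods need so the kubelet never pulls from Docker Hub
    out/minikube-linux-amd64 -p functional-031973 image load docker.io/nginx:latest
    out/minikube-linux-amd64 -p functional-031973 image load docker.io/mysql:5.7
    # alternatively, an authenticated or mirrored registry can be configured at start time
    # via the --registry-mirror flag of "minikube start"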
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-031973
helpers_test.go:243: (dbg) docker inspect functional-031973:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	        "Created": "2025-12-02T15:15:37.382465049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 437199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:15:37.417630105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3-json.log",
	        "Name": "/functional-031973",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-031973:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-031973",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	                "LowerDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-031973",
	                "Source": "/var/lib/docker/volumes/functional-031973/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-031973",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-031973",
	                "name.minikube.sigs.k8s.io": "functional-031973",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c60273079da1f9d4e348ddcae81f0a2346ec733b5680c77eb71ba260385fd94",
	            "SandboxKey": "/var/run/docker/netns/5c60273079da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-031973": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "072e297832857b662108017b58f1caabb1f529b2dbb839e022eeb4c01cc96da4",
	                    "EndpointID": "60b5bde8cb58337b502aeac0f46839fc2f8c145ed5188498e6f8715b9c69a2f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "92:85:11:0a:bb:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-031973",
	                        "8e6415af0faf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-031973 -n functional-031973
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs -n 25: (1.32617752s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-031973 image rm kicbase/echo-server:functional-031973 --alsologtostderr                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount1                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount2                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount3                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image save --daemon kicbase/echo-server:functional-031973 --alsologtostderr                                   │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ mount          │ -p functional-031973 --kill=true                                                                                                │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2576747610/001/cp-test.txt      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                       │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /tmp/does/not/exist/cp-test.txt                                             │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format short --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format yaml --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh pgrep buildkitd                                                                                           │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ image          │ functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr                          │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format json --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format table --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:17:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:17:28.857802  449198 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:28.858147  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858159  449198 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:28.858167  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858525  449198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:17:28.859129  449198 out.go:368] Setting JSON to false
	I1202 15:17:28.860514  449198 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7191,"bootTime":1764681458,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:17:28.860594  449198 start.go:143] virtualization: kvm guest
	I1202 15:17:28.862751  449198 out.go:179] * [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:17:28.864828  449198 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:17:28.864854  449198 notify.go:221] Checking for updates...
	I1202 15:17:28.867583  449198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:17:28.868765  449198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:17:28.870531  449198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:17:28.871999  449198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:17:28.873798  449198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:17:28.875560  449198 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:17:28.876221  449198 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:17:28.903402  449198 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:17:28.903623  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:28.974207  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.961998728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:28.974366  449198 docker.go:319] overlay module found
	I1202 15:17:28.977298  449198 out.go:179] * Using the docker driver based on existing profile
	I1202 15:17:28.978780  449198 start.go:309] selected driver: docker
	I1202 15:17:28.978801  449198 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:28.978924  449198 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:17:28.979041  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:29.050244  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:29.038869576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:29.050908  449198 cni.go:84] Creating CNI manager for ""
	I1202 15:17:29.051007  449198 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:17:29.051074  449198 start.go:353] cluster config:
	{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:29.054245  449198 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ad7eaef8b35d6       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   46052ac8f3adc       busybox-mount                               default
	4ab64fdb4167c       d4918ca78576a       6 minutes ago       Running             nginx                     0                   9638d534f9cc1       nginx-svc                                   default
	ea3aa607d0865       9056ab77afb8e       6 minutes ago       Running             echo-server               0                   38683d7f68a4a       hello-node-75c85bcc94-8dm24                 default
	ae1aa2afadc73       9056ab77afb8e       6 minutes ago       Running             echo-server               0                   7454f7d21eb41       hello-node-connect-7d85dfc575-hncff         default
	c337850ffec5c       01e8bacf0f500       6 minutes ago       Running             kube-controller-manager   2                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	cadd1246401e2       a5f569d49a979       6 minutes ago       Running             kube-apiserver            0                   2adea7d3feb75       kube-apiserver-functional-031973            kube-system
	646c1b88c2291       a3e246e9556e9       6 minutes ago       Running             etcd                      1                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	569b0e142e127       8aa150647e88a       6 minutes ago       Running             kube-proxy                1                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	06c1524f89b8b       01e8bacf0f500       6 minutes ago       Exited              kube-controller-manager   1                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	c8fa8295f09d0       88320b5498ff2       6 minutes ago       Running             kube-scheduler            1                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	b48e38bef4b45       52546a367cc9e       6 minutes ago       Running             coredns                   1                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	19f21b5e8580d       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	ef1e68dc2307c       6e38f40d628db       6 minutes ago       Running             storage-provisioner       1                   48df44372fd3e       storage-provisioner                         kube-system
	85c6d8bf722dc       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   48df44372fd3e       storage-provisioner                         kube-system
	84288ecd238d1       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	8db4008180d89       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	c2d18e41fc203       8aa150647e88a       7 minutes ago       Exited              kube-proxy                0                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	3850d6885c4a8       88320b5498ff2       7 minutes ago       Exited              kube-scheduler            0                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	7a94f88d6c9f0       a3e246e9556e9       7 minutes ago       Exited              etcd                      0                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	
	
	==> containerd <==
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.230259490Z" level=info msg="container event discarded" container=ae1aa2afadc7333251d88624ce4c05c6201cf7820e7b6f240b0f8f750b5dd3d4 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.795069032Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:18 functional-031973 containerd[3792]: time="2025-12-02T15:22:18.844444763Z" level=info msg="container event discarded" container=ea3aa607d0865f5186d05378998d4e3ba27baa0ba6dc06509421ece22a3f8a34 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.867316613Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:21 functional-031973 containerd[3792]: time="2025-12-02T15:22:21.922392722Z" level=info msg="container event discarded" container=4ab64fdb4167c83743b157f78cfa339cf34c3f3c7d8a65af7efa8add4db4edc3 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448013865Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:22 functional-031973 containerd[3792]: time="2025-12-02T15:22:22.448071031Z" level=info msg="container event discarded" container=c824e559fd6d7eb233460e9bee97a66eba020ffc704242346ca4aa69bf60b56d type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902576613Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902816399Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.386699943Z" level=info msg="container event discarded" container=f32572dc4817cfef4dfa962257d762112dda58fa4d151cccbefafd17b6a48bbe type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.386749637Z" level=info msg="container event discarded" container=f32572dc4817cfef4dfa962257d762112dda58fa4d151cccbefafd17b6a48bbe type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.406719754Z" level=info msg="container event discarded" container=336f2317270d813565cfc63d9e836f5fcb26d9d718283f98fac57293070f0a14 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.406776839Z" level=info msg="container event discarded" container=336f2317270d813565cfc63d9e836f5fcb26d9d718283f98fac57293070f0a14 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.012449104Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.078856331Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.238945913Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_STOPPED_EVENT
	Dec 02 15:22:34 functional-031973 containerd[3792]: time="2025-12-02T15:22:34.200856807Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STOPPED_EVENT
	Dec 02 15:22:39 functional-031973 containerd[3792]: time="2025-12-02T15:22:39.594621488Z" level=info msg="container event discarded" container=51b1724cf93e14593add8d21fa5b3719ed852650e57cfdda29df6946161328b6 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:39 functional-031973 containerd[3792]: time="2025-12-02T15:22:39.594706786Z" level=info msg="container event discarded" container=51b1724cf93e14593add8d21fa5b3719ed852650e57cfdda29df6946161328b6 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:47 functional-031973 containerd[3792]: time="2025-12-02T15:22:47.946455686Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_DELETED_EVENT
	Dec 02 15:22:47 functional-031973 containerd[3792]: time="2025-12-02T15:22:47.946561131Z" level=info msg="container event discarded" container=325c273b19ce3626ee1377f7cbb1bb57de4b739c3413425a92dcdd79c186257e type=CONTAINER_DELETED_EVENT
	Dec 02 15:23:05 functional-031973 containerd[3792]: time="2025-12-02T15:23:05.924680753Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 02 15:23:08 functional-031973 containerd[3792]: time="2025-12-02T15:23:08.552338443Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:08 functional-031973 containerd[3792]: time="2025-12-02T15:23:08.552424182Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Dec 02 15:23:22 functional-031973 containerd[3792]: time="2025-12-02T15:23:22.925135659Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	
	
	==> coredns [84288ecd238d1ae9a22d0f967cce2f858ff120a649bf4bb1ed143ac2e88eae81] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46439 - 44210 "HINFO IN 7161019882344419339.5475944101483733167. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0705958s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b48e38bef4b45aff4e9fe11ebb9238a1fa36a1eb7ac89a19b04a3c28f80f0997] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49535 - 17750 "HINFO IN 3721344718356208668.6506403759620066385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026959352s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-031973
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-031973
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-031973
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-031973
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:23:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:20:55 +0000   Tue, 02 Dec 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-031973
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                120a04eb-b735-4dd2-a8e6-bf3b871cface
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8dm24                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     hello-node-connect-7d85dfc575-hncff           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     mysql-5bb876957f-ljrh9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m44s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-b94tb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m24s
	  kube-system                 etcd-functional-031973                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m30s
	  kube-system                 kindnet-z4gbw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m26s
	  kube-system                 kube-apiserver-functional-031973              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-functional-031973     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-proxy-zpxn7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-functional-031973              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wk9xg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b6pzr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m24s                  kube-proxy       
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m35s (x8 over 7m35s)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x8 over 7m35s)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x7 over 7m35s)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m30s                  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m30s                  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s                  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m26s                  node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	  Normal  NodeReady                7m13s                  kubelet          Node functional-031973 status is now: NodeReady
	  Normal  Starting                 6m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m36s)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m36s)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m36s)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[ +13.571564] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 96 e2 dd 40 21 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[  +2.699615] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[Dec 2 14:52] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 3c f9 c8 55 0b 08 06
	[  +0.118748] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	[  +0.856727] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +14.974602] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c3 c5 ff a1 a9 08 06
	[  +0.000340] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[  +2.666742] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 5e 20 e4 1d 98 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +24.223711] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 09 24 19 b9 42 08 06
	[  +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	
	
	==> etcd [646c1b88c2291cb75cfbfa0d6acbe8c8f6efeb9548850bda8083a0da895f1895] <==
	{"level":"warn","ts":"2025-12-02T15:16:49.368918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.376172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.384407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.406190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.412902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.419629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.426243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.432784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.439580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.448532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.458893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.466153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.473143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.480092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.486754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.494019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.503161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.511195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.518025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.534149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.540735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.558898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.565791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.573609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.627411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	
	
	==> etcd [7a94f88d6c9f001c713f28d38be30c8b80117dc154363dfccf439f82d547fabb] <==
	{"level":"warn","ts":"2025-12-02T15:15:50.393142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.400004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.407216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.429152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.436768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.489896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:16:45.855107Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:16:45.855196Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:16:45.855325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.857011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.858427Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.858500Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:16:45.858534Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858519Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858525Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T15:16:45.858554Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858575Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:16:45.858583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-02T15:16:45.858586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:16:45.860709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860741Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:16:45.860752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 15:23:23 up  2:05,  0 user,  load average: 0.07, 0.45, 0.78
	Linux functional-031973 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19f21b5e8580d8f28a81006ac30e2cb2f04cbd5dcb33e97d6895451934417eeb] <==
	I1202 15:21:16.941891       1 main.go:301] handling current node
	I1202 15:21:26.941322       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:26.941380       1 main.go:301] handling current node
	I1202 15:21:36.940708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:36.940755       1 main.go:301] handling current node
	I1202 15:21:46.946787       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:46.946828       1 main.go:301] handling current node
	I1202 15:21:56.941427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:21:56.941463       1 main.go:301] handling current node
	I1202 15:22:06.949323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:06.949362       1 main.go:301] handling current node
	I1202 15:22:16.944491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:16.944530       1 main.go:301] handling current node
	I1202 15:22:26.940723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:26.940762       1 main.go:301] handling current node
	I1202 15:22:36.940537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:36.940567       1 main.go:301] handling current node
	I1202 15:22:46.943788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:46.943824       1 main.go:301] handling current node
	I1202 15:22:56.940850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:22:56.940893       1 main.go:301] handling current node
	I1202 15:23:06.949813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:23:06.949849       1 main.go:301] handling current node
	I1202 15:23:16.942865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:23:16.942905       1 main.go:301] handling current node
	
	
	==> kindnet [8db4008180d89c313b691c3ffc28ed67067eecede802fad652ac37fd6fd36acd] <==
	I1202 15:15:59.740136       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:15:59.740462       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:15:59.740638       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:15:59.740656       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:15:59.740718       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:15:59.942159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:15:59.942193       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:15:59.942205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:15:59.942356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:16:00.448425       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:16:00.448465       1 metrics.go:72] Registering metrics
	I1202 15:16:00.448562       1 controller.go:711] "Syncing nftables rules"
	I1202 15:16:09.943353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:09.943468       1 main.go:301] handling current node
	I1202 15:16:19.947849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:19.947892       1 main.go:301] handling current node
	I1202 15:16:29.945800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:29.945842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f] <==
	I1202 15:16:50.088066       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 15:16:50.088162       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 15:16:50.088208       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 15:16:50.092638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:16:50.097887       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:16:50.111508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 15:16:50.123577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:16:50.127784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:16:50.926982       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:16:50.991403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 15:16:51.297655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:16:51.303790       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:16:51.778956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:16:51.875708       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:16:51.933585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:16:51.941645       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:16:53.753031       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:17:10.530696       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.2.222"}
	I1202 15:17:15.965948       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.119.76"}
	I1202 15:17:16.154741       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.98.205"}
	I1202 15:17:16.202184       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.87.195"}
	I1202 15:17:29.825060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:17:29.960687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.153.94"}
	I1202 15:17:29.973650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.6.186"}
	I1202 15:17:39.107718       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.191.136"}
	
	
	==> kube-controller-manager [06c1524f89b8b1a6e7711d8cea9dec8e489ce09bfdb7e9eeadd318646ca74233] <==
	I1202 15:16:37.392510       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:16:38.150126       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 15:16:38.150155       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:38.151661       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:16:38.151685       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:16:38.152002       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:16:38.152052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:16:48.154086       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66] <==
	I1202 15:16:53.453783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 15:16:53.453829       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 15:16:53.453856       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 15:16:53.453874       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 15:16:53.453881       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 15:16:53.453920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:16:53.454445       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 15:16:53.455458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.455556       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.458938       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 15:16:53.461334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:53.461352       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 15:16:53.461363       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:16:53.461628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 15:16:53.463921       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:16:53.464062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 15:16:53.468396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 15:16:53.470743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 15:17:29.890175       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.895423       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.901866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.902541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.908349       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.912406       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.913104       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [569b0e142e12766db902223ca7eb146be3849a69f3c33df418b36923d82a585a] <==
	I1202 15:16:36.597068       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1202 15:16:36.598151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:37.841108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:40.338703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:45.857370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:16:58.598080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:16:58.598118       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:16:58.598231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:16:58.621795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:16:58.621848       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:16:58.627324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:16:58.627586       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:16:58.627599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:58.628806       1 config.go:200] "Starting service config controller"
	I1202 15:16:58.628830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:16:58.628871       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:16:58.628889       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:16:58.628923       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:16:58.628876       1 config.go:309] "Starting node config controller"
	I1202 15:16:58.628957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:16:58.628966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:16:58.628972       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:16:58.729789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:16:58.729915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:16:58.729953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c2d18e41fc203eee96d6a09dfee77221ae299daef844af8d7758972f0d5eebd6] <==
	I1202 15:15:59.236263       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:15:59.310604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:15:59.411054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:15:59.411094       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:15:59.411211       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:15:59.458243       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:15:59.458294       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:15:59.464059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:15:59.464551       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:15:59.464588       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:15:59.466130       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:15:59.466142       1 config.go:309] "Starting node config controller"
	I1202 15:15:59.466157       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:15:59.466162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:15:59.466183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:15:59.466219       1 config.go:200] "Starting service config controller"
	I1202 15:15:59.466231       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:15:59.466205       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:15:59.466263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:15:59.566412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:15:59.566431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:15:59.566517       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3850d6885c4a8427a31f9c1e3c8dfc49dde93cc3abd5127ae5b5e17c87485b87] <==
	E1202 15:15:50.912434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:50.912471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:50.912528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:50.912901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:50.912944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:51.890107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:15:51.920213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:15:51.922190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:51.923075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:52.010654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:15:52.071843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:52.119186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:15:52.132528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:52.144768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:15:52.154877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:52.187203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:15:52.196387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:15:52.354774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 15:15:54.408421       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635194       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635272       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:16:35.635369       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:16:35.635381       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1202 15:16:35.635281       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1202 15:16:35.635400       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8fa8295f09d01cc139eda620db6d699a0081f04519fd714f09996c687592e9e] <==
	E1202 15:16:41.871504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:41.883003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:41.978092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:42.223233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:42.331606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:44.153592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:16:44.898779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:45.096198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:16:45.140830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:16:45.280464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:45.364422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:45.426247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:45.761211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:45.954414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:16:46.119488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:46.263246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:16:46.408248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:46.563072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:46.612942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 15:16:47.095235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:47.397805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:47.457860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:16:47.479575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:16:47.828022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:17:00.145159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:22:41 functional-031973 kubelet[4768]: E1202 15:22:41.924396    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:22:45 functional-031973 kubelet[4768]: E1202 15:22:45.924787    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:22:50 functional-031973 kubelet[4768]: E1202 15:22:50.924354    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:22:51 functional-031973 kubelet[4768]: E1202 15:22:51.924857    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:22:54 functional-031973 kubelet[4768]: E1202 15:22:54.925083    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:22:59 functional-031973 kubelet[4768]: E1202 15:22:59.924795    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:23:03 functional-031973 kubelet[4768]: E1202 15:23:03.928466    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:23:05 functional-031973 kubelet[4768]: E1202 15:23:05.925044    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:23:08 functional-031973 kubelet[4768]: E1202 15:23:08.552785    4768 log.go:32] "PullImage from image service failed" err=<
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:23:08 functional-031973 kubelet[4768]:  > image="docker.io/nginx:latest"
	Dec 02 15:23:08 functional-031973 kubelet[4768]: E1202 15:23:08.552839    4768 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:23:08 functional-031973 kubelet[4768]:  > image="docker.io/nginx:latest"
	Dec 02 15:23:08 functional-031973 kubelet[4768]: E1202 15:23:08.552983    4768 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         container myfrontend start failed in pod sp-pod_default(038e8210-22af-4586-bb52-4b0ff00eb7be): ErrImagePull: failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	Dec 02 15:23:08 functional-031973 kubelet[4768]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:23:08 functional-031973 kubelet[4768]:  > logger="UnhandledError"
	Dec 02 15:23:08 functional-031973 kubelet[4768]: E1202 15:23:08.553013    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:23:10 functional-031973 kubelet[4768]: E1202 15:23:10.924704    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:23:16 functional-031973 kubelet[4768]: E1202 15:23:16.924717    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:23:19 functional-031973 kubelet[4768]: E1202 15:23:19.923483    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:23:19 functional-031973 kubelet[4768]: E1202 15:23:19.924358    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	
	
	==> storage-provisioner [85c6d8bf722dcb136812c6f14c45b5d380b1de637a1b3615b9d1d2b7fb98940c] <==
	W1202 15:16:10.586263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.586468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 15:16:10.586687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	I1202 15:16:10.586920       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06f31c89-8177-4473-aea9-89a84ed0b889", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7 became leader
	W1202 15:16:10.589335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:10.592545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.687378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	W1202 15:16:12.596560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:12.601329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.604965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.614131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.617205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.621029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.624813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.630072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.633409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.637710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.641582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.646626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.650290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.655142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.658929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.664778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.668074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.672710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ef1e68dc2307c5daf3aa5cdb63ca8b1bb338e7f8dfd850d51a666ac3747a2970] <==
	W1202 15:22:59.627515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:01.630732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:01.636049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:03.639754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:03.643800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:05.647688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:05.652996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:07.655857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:07.659879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:09.663701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:09.667714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:11.670963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:11.675106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:13.678559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:13.683862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:15.687481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:15.691548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:17.695298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:17.699316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:19.702360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:19.706101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:21.709727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:21.716207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:23.719869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:23:23.724532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
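Note on the repeated storage-provisioner warnings in the log above: v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. For reference only, the commands below were not run by this test; the object name comes from the leader-election event in the same log, and they simply show the old and the new API side by side for this profile.

	# Hedged sketch, not executed by the test run.
	kubectl --context functional-031973 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context functional-031973 -n kube-system get endpointslices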
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
helpers_test.go:269: (dbg) Run:  kubectl --context functional-031973 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1 (81.64409ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:17:32 +0000
	      Finished:     Tue, 02 Dec 2025 15:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6jsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x6jsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m55s  default-scheduler  Successfully assigned default/busybox-mount to functional-031973
	  Normal  Pulling    5m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.085s (2.085s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m52s  kubelet            Created container: mount-munger
	  Normal  Started    5m52s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-ljrh9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:39 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gk92d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gk92d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m45s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-ljrh9 to functional-031973
	  Normal   Pulling    2m31s (x5 over 5m45s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m28s (x5 over 5m42s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m28s (x5 over 5m42s)  kubelet  Error: ErrImagePull
	  Warning  Failed   33s (x20 over 5m41s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  21s (x21 over 5m41s)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slhv5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-slhv5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m2s                default-scheduler  Successfully assigned default/sp-pod to functional-031973
	  Warning  Failed     4m36s (x4 over 6m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  3m8s (x5 over 6m2s)  kubelet  Pulling image "docker.io/nginx"
	  Warning  Failed   3m5s (x5 over 6m)    kubelet  Error: ErrImagePull
	  Warning  Failed   3m5s                 kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  46s (x21 over 5m59s)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   46s (x21 over 5m59s)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wk9xg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-b6pzr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1
E1202 15:26:48.210731  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.10s)
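Every non-running pod in this failure traces back to the same kubelet error recorded above: unauthenticated pulls from registry-1.docker.io answered with 429 Too Many Requests. A minimal mitigation sketch, not part of this run: pull on the host and side-load the images so the kubelet never contacts Docker Hub. Profile and image names are taken from the report; the sketch assumes the host's Docker daemon can still pull (or already caches) the images.

	# Hedged sketch, not executed by this test run.
	docker pull docker.io/nginx:latest
	docker pull docker.io/mysql:5.7
	# "image load" copies an image from the host daemon (or a tarball) into the
	# profile's container runtime, avoiding a registry pull inside the cluster.
	out/minikube-linux-amd64 -p functional-031973 image load docker.io/nginx:latest
	out/minikube-linux-amd64 -p functional-031973 image load docker.io/mysql:5.7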

                                                
                                    
TestFunctional/parallel/MySQL (602.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-031973 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-ljrh9" [e67844aa-4a0f-4537-a2b2-6900a351107b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-12-02 15:27:39.493389674 +0000 UTC m=+1096.856283169
functional_test.go:1804: (dbg) Run:  kubectl --context functional-031973 describe po mysql-5bb876957f-ljrh9 -n default
functional_test.go:1804: (dbg) kubectl --context functional-031973 describe po mysql-5bb876957f-ljrh9 -n default:
Name:             mysql-5bb876957f-ljrh9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-031973/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:17:39 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gk92d (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gk92d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-ljrh9 to functional-031973
Normal   Pulling    6m46s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     6m43s (x5 over 9m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   6m43s (x5 over 9m57s)   kubelet  Error: ErrImagePull
Warning  Failed   4m48s (x20 over 9m56s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m36s (x21 over 9m56s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-031973 logs mysql-5bb876957f-ljrh9 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-031973 logs mysql-5bb876957f-ljrh9 -n default: exit status 1 (70.931559ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-ljrh9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-031973 logs mysql-5bb876957f-ljrh9 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
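The 10m0s wait above is performed by the test harness itself; a rough kubectl equivalent (a sketch, not a command from the test source) would block on the same readiness condition and time out the same way here, since the pod never leaves ImagePullBackOff.

	# Hedged sketch, not executed by the test run.
	kubectl --context functional-031973 -n default wait --for=condition=Ready pod -l app=mysql --timeout=10m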
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-031973
helpers_test.go:243: (dbg) docker inspect functional-031973:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	        "Created": "2025-12-02T15:15:37.382465049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 437199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:15:37.417630105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3/8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3-json.log",
	        "Name": "/functional-031973",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-031973:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-031973",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8e6415af0faf450563a814c4320c99a3fdffa7cb0ee3328d6db07a2fba5353e3",
	                "LowerDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8e501cc39f97b2264b9620db8c3575efd7e10f0796e3fc558490e7b693b56b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-031973",
	                "Source": "/var/lib/docker/volumes/functional-031973/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-031973",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-031973",
	                "name.minikube.sigs.k8s.io": "functional-031973",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5c60273079da1f9d4e348ddcae81f0a2346ec733b5680c77eb71ba260385fd94",
	            "SandboxKey": "/var/run/docker/netns/5c60273079da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33169"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33168"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-031973": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "072e297832857b662108017b58f1caabb1f529b2dbb839e022eeb4c01cc96da4",
	                    "EndpointID": "60b5bde8cb58337b502aeac0f46839fc2f8c145ed5188498e6f8715b9c69a2f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "92:85:11:0a:bb:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-031973",
	                        "8e6415af0faf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-031973 -n functional-031973
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs -n 25: (1.312890558s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                              ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-031973 image rm kicbase/echo-server:functional-031973 --alsologtostderr                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount1                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount2                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh findmnt -T /mount3                                                                                        │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image save --daemon kicbase/echo-server:functional-031973 --alsologtostderr                                   │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ mount          │ -p functional-031973 --kill=true                                                                                                │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                              │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2576747610/001/cp-test.txt      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /home/docker/cp-test.txt                                                    │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ cp             │ functional-031973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                       │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh -n functional-031973 sudo cat /tmp/does/not/exist/cp-test.txt                                             │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ update-context │ functional-031973 update-context --alsologtostderr -v=2                                                                         │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format short --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format yaml --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ ssh            │ functional-031973 ssh pgrep buildkitd                                                                                           │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │                     │
	│ image          │ functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr                          │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls                                                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format json --alsologtostderr                                                                      │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	│ image          │ functional-031973 image ls --format table --alsologtostderr                                                                     │ functional-031973 │ jenkins │ v1.37.0 │ 02 Dec 25 15:17 UTC │ 02 Dec 25 15:17 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:17:28
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:17:28.857802  449198 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:28.858147  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858159  449198 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:28.858167  449198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.858525  449198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:17:28.859129  449198 out.go:368] Setting JSON to false
	I1202 15:17:28.860514  449198 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7191,"bootTime":1764681458,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:17:28.860594  449198 start.go:143] virtualization: kvm guest
	I1202 15:17:28.862751  449198 out.go:179] * [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:17:28.864828  449198 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:17:28.864854  449198 notify.go:221] Checking for updates...
	I1202 15:17:28.867583  449198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:17:28.868765  449198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:17:28.870531  449198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:17:28.871999  449198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:17:28.873798  449198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:17:28.875560  449198 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:17:28.876221  449198 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:17:28.903402  449198 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:17:28.903623  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:28.974207  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.961998728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:28.974366  449198 docker.go:319] overlay module found
	I1202 15:17:28.977298  449198 out.go:179] * Using the docker driver based on existing profile
	I1202 15:17:28.978780  449198 start.go:309] selected driver: docker
	I1202 15:17:28.978801  449198 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:28.978924  449198 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:17:28.979041  449198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:29.050244  449198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:29.038869576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:29.050908  449198 cni.go:84] Creating CNI manager for ""
	I1202 15:17:29.051007  449198 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:17:29.051074  449198 start.go:353] cluster config:
	{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:29.054245  449198 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ad7eaef8b35d6       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   46052ac8f3adc       busybox-mount                               default
	4ab64fdb4167c       d4918ca78576a       10 minutes ago      Running             nginx                     0                   9638d534f9cc1       nginx-svc                                   default
	ea3aa607d0865       9056ab77afb8e       10 minutes ago      Running             echo-server               0                   38683d7f68a4a       hello-node-75c85bcc94-8dm24                 default
	ae1aa2afadc73       9056ab77afb8e       10 minutes ago      Running             echo-server               0                   7454f7d21eb41       hello-node-connect-7d85dfc575-hncff         default
	c337850ffec5c       01e8bacf0f500       10 minutes ago      Running             kube-controller-manager   2                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	cadd1246401e2       a5f569d49a979       10 minutes ago      Running             kube-apiserver            0                   2adea7d3feb75       kube-apiserver-functional-031973            kube-system
	646c1b88c2291       a3e246e9556e9       10 minutes ago      Running             etcd                      1                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	569b0e142e127       8aa150647e88a       11 minutes ago      Running             kube-proxy                1                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	06c1524f89b8b       01e8bacf0f500       11 minutes ago      Exited              kube-controller-manager   1                   de46fe6f6caf9       kube-controller-manager-functional-031973   kube-system
	c8fa8295f09d0       88320b5498ff2       11 minutes ago      Running             kube-scheduler            1                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	b48e38bef4b45       52546a367cc9e       11 minutes ago      Running             coredns                   1                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	19f21b5e8580d       409467f978b4a       11 minutes ago      Running             kindnet-cni               1                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	ef1e68dc2307c       6e38f40d628db       11 minutes ago      Running             storage-provisioner       1                   48df44372fd3e       storage-provisioner                         kube-system
	85c6d8bf722dc       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       0                   48df44372fd3e       storage-provisioner                         kube-system
	84288ecd238d1       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   006f7e1e9e593       coredns-66bc5c9577-b94tb                    kube-system
	8db4008180d89       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   b914ab31d57c5       kindnet-z4gbw                               kube-system
	c2d18e41fc203       8aa150647e88a       11 minutes ago      Exited              kube-proxy                0                   bbc9d3c7d3116       kube-proxy-zpxn7                            kube-system
	3850d6885c4a8       88320b5498ff2       11 minutes ago      Exited              kube-scheduler            0                   f9c0c5bed4df1       kube-scheduler-functional-031973            kube-system
	7a94f88d6c9f0       a3e246e9556e9       11 minutes ago      Exited              etcd                      0                   03c0f3cc2c9a0       etcd-functional-031973                      kube-system
	
	
	==> containerd <==
	Dec 02 15:22:29 functional-031973 containerd[3792]: time="2025-12-02T15:22:29.902816399Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.386699943Z" level=info msg="container event discarded" container=f32572dc4817cfef4dfa962257d762112dda58fa4d151cccbefafd17b6a48bbe type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.386749637Z" level=info msg="container event discarded" container=f32572dc4817cfef4dfa962257d762112dda58fa4d151cccbefafd17b6a48bbe type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.406719754Z" level=info msg="container event discarded" container=336f2317270d813565cfc63d9e836f5fcb26d9d718283f98fac57293070f0a14 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:30 functional-031973 containerd[3792]: time="2025-12-02T15:22:30.406776839Z" level=info msg="container event discarded" container=336f2317270d813565cfc63d9e836f5fcb26d9d718283f98fac57293070f0a14 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.012449104Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.078856331Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:32 functional-031973 containerd[3792]: time="2025-12-02T15:22:32.238945913Z" level=info msg="container event discarded" container=ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae type=CONTAINER_STOPPED_EVENT
	Dec 02 15:22:34 functional-031973 containerd[3792]: time="2025-12-02T15:22:34.200856807Z" level=info msg="container event discarded" container=46052ac8f3adcd68ecfaf90d766201ffb3debad7351ccda45734ca06392e5c13 type=CONTAINER_STOPPED_EVENT
	Dec 02 15:22:39 functional-031973 containerd[3792]: time="2025-12-02T15:22:39.594621488Z" level=info msg="container event discarded" container=51b1724cf93e14593add8d21fa5b3719ed852650e57cfdda29df6946161328b6 type=CONTAINER_CREATED_EVENT
	Dec 02 15:22:39 functional-031973 containerd[3792]: time="2025-12-02T15:22:39.594706786Z" level=info msg="container event discarded" container=51b1724cf93e14593add8d21fa5b3719ed852650e57cfdda29df6946161328b6 type=CONTAINER_STARTED_EVENT
	Dec 02 15:22:47 functional-031973 containerd[3792]: time="2025-12-02T15:22:47.946455686Z" level=info msg="container event discarded" container=9cfeb97bab0c34c1f07b51fcaa1b2f35df812739d282befec416a6728b6af663 type=CONTAINER_DELETED_EVENT
	Dec 02 15:22:47 functional-031973 containerd[3792]: time="2025-12-02T15:22:47.946561131Z" level=info msg="container event discarded" container=325c273b19ce3626ee1377f7cbb1bb57de4b739c3413425a92dcdd79c186257e type=CONTAINER_DELETED_EVENT
	Dec 02 15:23:05 functional-031973 containerd[3792]: time="2025-12-02T15:23:05.924680753Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 02 15:23:08 functional-031973 containerd[3792]: time="2025-12-02T15:23:08.552338443Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:08 functional-031973 containerd[3792]: time="2025-12-02T15:23:08.552424182Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Dec 02 15:23:22 functional-031973 containerd[3792]: time="2025-12-02T15:23:22.925135659Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 02 15:23:25 functional-031973 containerd[3792]: time="2025-12-02T15:23:25.176907909Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:25 functional-031973 containerd[3792]: time="2025-12-02T15:23:25.176984524Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11014"
	Dec 02 15:23:31 functional-031973 containerd[3792]: time="2025-12-02T15:23:31.925040833Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 02 15:23:34 functional-031973 containerd[3792]: time="2025-12-02T15:23:34.160917357Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:34 functional-031973 containerd[3792]: time="2025-12-02T15:23:34.160967031Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Dec 02 15:23:43 functional-031973 containerd[3792]: time="2025-12-02T15:23:43.925262930Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 02 15:23:46 functional-031973 containerd[3792]: time="2025-12-02T15:23:46.172062777Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:46 functional-031973 containerd[3792]: time="2025-12-02T15:23:46.172135910Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	
	
	==> coredns [84288ecd238d1ae9a22d0f967cce2f858ff120a649bf4bb1ed143ac2e88eae81] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46439 - 44210 "HINFO IN 7161019882344419339.5475944101483733167. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0705958s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b48e38bef4b45aff4e9fe11ebb9238a1fa36a1eb7ac89a19b04a3c28f80f0997] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49535 - 17750 "HINFO IN 3721344718356208668.6506403759620066385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026959352s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-031973
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-031973
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-031973
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_15_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-031973
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:27:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:26:01 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:26:01 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:26:01 +0000   Tue, 02 Dec 2025 15:15:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:26:01 +0000   Tue, 02 Dec 2025 15:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-031973
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                120a04eb-b735-4dd2-a8e6-bf3b871cface
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8dm24                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-hncff           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-ljrh9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-b94tb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-031973                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-z4gbw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-031973              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-031973     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zpxn7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-031973              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wk9xg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b6pzr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-031973 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-031973 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-031973 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-031973 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-031973 event: Registered Node functional-031973 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[ +13.571564] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 96 e2 dd 40 21 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[  +2.699615] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[Dec 2 14:52] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 3c f9 c8 55 0b 08 06
	[  +0.118748] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	[  +0.856727] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +14.974602] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c3 c5 ff a1 a9 08 06
	[  +0.000340] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[  +2.666742] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 5e 20 e4 1d 98 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +24.223711] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 09 24 19 b9 42 08 06
	[  +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	
	
	==> etcd [646c1b88c2291cb75cfbfa0d6acbe8c8f6efeb9548850bda8083a0da895f1895] <==
	{"level":"warn","ts":"2025-12-02T15:16:49.406190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.412902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.419629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.426243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.432784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.439580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.448532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.458893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.466153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.473143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.480092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.486754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.494019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.503161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.511195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.518025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.534149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.540735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.558898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.565791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.573609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:16:49.627411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:26:49.089644Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1234}
	{"level":"info","ts":"2025-12-02T15:26:49.110149Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1234,"took":"20.120294ms","hash":1857196744,"current-db-size-bytes":3670016,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1785856,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-02T15:26:49.110203Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1857196744,"revision":1234,"compact-revision":-1}
	
	
	==> etcd [7a94f88d6c9f001c713f28d38be30c8b80117dc154363dfccf439f82d547fabb] <==
	{"level":"warn","ts":"2025-12-02T15:15:50.393142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.400004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.407216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.429152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.436768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.448340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:15:50.489896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:16:45.855107Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:16:45.855196Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:16:45.855325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.857011Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:16:45.858427Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.858500Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:16:45.858534Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858519Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858525Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-02T15:16:45.858554Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858567Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:16:45.858575Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:16:45.858583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-02T15:16:45.858586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:16:45.860709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:16:45.860741Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:16:45.860752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-031973","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 15:27:40 up  2:10,  0 user,  load average: 0.19, 0.28, 0.63
	Linux functional-031973 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19f21b5e8580d8f28a81006ac30e2cb2f04cbd5dcb33e97d6895451934417eeb] <==
	I1202 15:25:36.945576       1 main.go:301] handling current node
	I1202 15:25:46.940545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:25:46.940613       1 main.go:301] handling current node
	I1202 15:25:56.941515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:25:56.941565       1 main.go:301] handling current node
	I1202 15:26:06.941881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:06.941994       1 main.go:301] handling current node
	I1202 15:26:16.941067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:16.941103       1 main.go:301] handling current node
	I1202 15:26:26.941052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:26.941085       1 main.go:301] handling current node
	I1202 15:26:36.941359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:36.941442       1 main.go:301] handling current node
	I1202 15:26:46.941379       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:46.941418       1 main.go:301] handling current node
	I1202 15:26:56.940799       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:26:56.940844       1 main.go:301] handling current node
	I1202 15:27:06.942004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:27:06.942059       1 main.go:301] handling current node
	I1202 15:27:16.940963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:27:16.941010       1 main.go:301] handling current node
	I1202 15:27:26.941517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:27:26.941556       1 main.go:301] handling current node
	I1202 15:27:36.941096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:27:36.941141       1 main.go:301] handling current node
	
	
	==> kindnet [8db4008180d89c313b691c3ffc28ed67067eecede802fad652ac37fd6fd36acd] <==
	I1202 15:15:59.740136       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:15:59.740462       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:15:59.740638       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:15:59.740656       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:15:59.740718       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:15:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:15:59.942159       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:15:59.942193       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:15:59.942205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:15:59.942356       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:16:00.448425       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:16:00.448465       1 metrics.go:72] Registering metrics
	I1202 15:16:00.448562       1 controller.go:711] "Syncing nftables rules"
	I1202 15:16:09.943353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:09.943468       1 main.go:301] handling current node
	I1202 15:16:19.947849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:19.947892       1 main.go:301] handling current node
	I1202 15:16:29.945800       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:16:29.945842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cadd1246401e2709608c00abb7ba9788bcb8e60e9c15eb367d9009429020ba0f] <==
	I1202 15:16:50.088162       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1202 15:16:50.088208       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1202 15:16:50.092638       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:16:50.097887       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:16:50.111508       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 15:16:50.123577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:16:50.127784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:16:50.926982       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:16:50.991403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 15:16:51.297655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:16:51.303790       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:16:51.778956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:16:51.875708       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:16:51.933585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:16:51.941645       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:16:53.753031       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:17:10.530696       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.2.222"}
	I1202 15:17:15.965948       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.119.76"}
	I1202 15:17:16.154741       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.98.205"}
	I1202 15:17:16.202184       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.87.195"}
	I1202 15:17:29.825060       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:17:29.960687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.153.94"}
	I1202 15:17:29.973650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.6.186"}
	I1202 15:17:39.107718       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.191.136"}
	I1202 15:26:50.016431       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [06c1524f89b8b1a6e7711d8cea9dec8e489ce09bfdb7e9eeadd318646ca74233] <==
	I1202 15:16:37.392510       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:16:38.150126       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1202 15:16:38.150155       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:38.151661       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:16:38.151685       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:16:38.152002       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:16:38.152052       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:16:48.154086       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c337850ffec5cbfad547c07320b3343ad60dcf859bf98a12c95ac2636f334b66] <==
	I1202 15:16:53.453783       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1202 15:16:53.453829       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1202 15:16:53.453856       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 15:16:53.453874       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 15:16:53.453881       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 15:16:53.453920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 15:16:53.454445       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1202 15:16:53.455458       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.455556       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 15:16:53.458938       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1202 15:16:53.461334       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 15:16:53.461352       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 15:16:53.461363       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 15:16:53.461628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1202 15:16:53.463921       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1202 15:16:53.464062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1202 15:16:53.468396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1202 15:16:53.470743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 15:17:29.890175       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.895423       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.901866       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.902541       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.908349       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.912406       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:17:29.913104       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [569b0e142e12766db902223ca7eb146be3849a69f3c33df418b36923d82a585a] <==
	I1202 15:16:36.597068       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1202 15:16:36.598151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:37.841108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:40.338703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:16:45.857370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-031973&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:16:58.598080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:16:58.598118       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:16:58.598231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:16:58.621795       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:16:58.621848       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:16:58.627324       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:16:58.627586       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:16:58.627599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:16:58.628806       1 config.go:200] "Starting service config controller"
	I1202 15:16:58.628830       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:16:58.628871       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:16:58.628889       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:16:58.628923       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:16:58.628876       1 config.go:309] "Starting node config controller"
	I1202 15:16:58.628957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:16:58.628966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:16:58.628972       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:16:58.729789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:16:58.729915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:16:58.729953       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c2d18e41fc203eee96d6a09dfee77221ae299daef844af8d7758972f0d5eebd6] <==
	I1202 15:15:59.236263       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:15:59.310604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 15:15:59.411054       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 15:15:59.411094       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:15:59.411211       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:15:59.458243       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:15:59.458294       1 server_linux.go:132] "Using iptables Proxier"
	I1202 15:15:59.464059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:15:59.464551       1 server.go:527] "Version info" version="v1.34.2"
	I1202 15:15:59.464588       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:15:59.466130       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:15:59.466142       1 config.go:309] "Starting node config controller"
	I1202 15:15:59.466157       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:15:59.466162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:15:59.466183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:15:59.466219       1 config.go:200] "Starting service config controller"
	I1202 15:15:59.466231       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:15:59.466205       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:15:59.466263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:15:59.566412       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:15:59.566431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:15:59.566517       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3850d6885c4a8427a31f9c1e3c8dfc49dde93cc3abd5127ae5b5e17c87485b87] <==
	E1202 15:15:50.912434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:50.912471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:50.912528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:50.912901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:50.912944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:51.890107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:15:51.920213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:15:51.922190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:15:51.923075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:15:52.010654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:15:52.071843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:15:52.119186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:15:52.132528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 15:15:52.144768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:15:52.154877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:15:52.187203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:15:52.196387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:15:52.354774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 15:15:54.408421       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635194       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:16:35.635272       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:16:35.635369       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:16:35.635381       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1202 15:16:35.635281       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1202 15:16:35.635400       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8fa8295f09d01cc139eda620db6d699a0081f04519fd714f09996c687592e9e] <==
	E1202 15:16:41.871504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:41.883003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:41.978092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:42.223233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:42.331606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:44.153592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 15:16:44.898779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 15:16:45.096198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 15:16:45.140830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 15:16:45.280464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 15:16:45.364422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 15:16:45.426247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 15:16:45.761211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 15:16:45.954414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 15:16:46.119488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 15:16:46.263246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 15:16:46.408248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 15:16:46.563072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 15:16:46.612942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 15:16:47.095235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 15:16:47.397805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 15:16:47.457860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 15:16:47.479575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 15:16:47.828022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1202 15:17:00.145159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 15:26:19 functional-031973 kubelet[4768]: E1202 15:26:19.924337    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:26:25 functional-031973 kubelet[4768]: E1202 15:26:25.924374    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:26:27 functional-031973 kubelet[4768]: E1202 15:26:27.924804    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:26:27 functional-031973 kubelet[4768]: E1202 15:26:27.924867    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:26:33 functional-031973 kubelet[4768]: E1202 15:26:33.924435    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:26:38 functional-031973 kubelet[4768]: E1202 15:26:38.924435    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:26:41 functional-031973 kubelet[4768]: E1202 15:26:41.925510    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:26:42 functional-031973 kubelet[4768]: E1202 15:26:42.924111    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:26:48 functional-031973 kubelet[4768]: E1202 15:26:48.925164    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:26:52 functional-031973 kubelet[4768]: E1202 15:26:52.924075    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:26:55 functional-031973 kubelet[4768]: E1202 15:26:55.925418    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:26:57 functional-031973 kubelet[4768]: E1202 15:26:57.925444    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:27:01 functional-031973 kubelet[4768]: E1202 15:27:01.924363    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:27:06 functional-031973 kubelet[4768]: E1202 15:27:06.924791    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:27:07 functional-031973 kubelet[4768]: E1202 15:27:07.924795    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:27:11 functional-031973 kubelet[4768]: E1202 15:27:11.925234    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:27:13 functional-031973 kubelet[4768]: E1202 15:27:13.925131    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:27:18 functional-031973 kubelet[4768]: E1202 15:27:18.923315    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:27:20 functional-031973 kubelet[4768]: E1202 15:27:20.924832    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:27:24 functional-031973 kubelet[4768]: E1202 15:27:24.925125    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:27:26 functional-031973 kubelet[4768]: E1202 15:27:26.924752    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	Dec 02 15:27:32 functional-031973 kubelet[4768]: E1202 15:27:32.924396    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="038e8210-22af-4586-bb52-4b0ff00eb7be"
	Dec 02 15:27:35 functional-031973 kubelet[4768]: E1202 15:27:35.925006    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wk9xg" podUID="7267d7f8-815a-4d11-9486-1d84104bf640"
	Dec 02 15:27:35 functional-031973 kubelet[4768]: E1202 15:27:35.925012    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-ljrh9" podUID="e67844aa-4a0f-4537-a2b2-6900a351107b"
	Dec 02 15:27:39 functional-031973 kubelet[4768]: E1202 15:27:39.924622    4768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b6pzr" podUID="269a2c1f-95ff-4df0-9a9d-7180a420df00"
	
	
	==> storage-provisioner [85c6d8bf722dcb136812c6f14c45b5d380b1de637a1b3615b9d1d2b7fb98940c] <==
	W1202 15:16:10.586263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.586468       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 15:16:10.586687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	I1202 15:16:10.586920       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"06f31c89-8177-4473-aea9-89a84ed0b889", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7 became leader
	W1202 15:16:10.589335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:10.592545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:16:10.687378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-031973_397fa3ca-6b00-4900-82e0-268f547da5e7!
	W1202 15:16:12.596560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:12.601329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.604965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:14.614131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.617205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:16.621029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.624813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:18.630072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.633409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:20.637710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.641582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:22.646626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.650290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:24.655142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.658929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:26.664778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.668074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:16:28.672710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ef1e68dc2307c5daf3aa5cdb63ca8b1bb338e7f8dfd850d51a666ac3747a2970] <==
	W1202 15:27:16.665120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:18.668014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:18.672053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:20.675842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:20.680971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:22.684499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:22.688469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:24.692005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:24.696182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:26.699924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:26.704399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:28.707757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:28.713469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:30.716814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:30.722105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:32.725491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:32.729421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:34.732843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:34.736852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:36.740817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:36.744898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:38.748294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:38.752197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:40.755475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:27:40.761087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
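Note on the storage-provisioner output above: the wall of "v1 Endpoints is deprecated in v1.33+" warnings is client-side noise, not a failure. The provisioner holds its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object (see the LeaderElection event above) and appears to re-read that object every couple of seconds, tripping the deprecation warning each time. A manual check of the lock object, purely illustrative and not part of the harness, would be:

	# Inspect the Endpoints object named in the leader-election event above
	kubectl --context functional-031973 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml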
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-031973 -n functional-031973
helpers_test.go:269: (dbg) Run:  kubectl --context functional-031973 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1 (83.212463ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://ad7eaef8b35d60e1dce92546738e84d4c79a3cf6d207f3f4a48a68cfd880b1ae
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:17:32 +0000
	      Finished:     Tue, 02 Dec 2025 15:17:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6jsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x6jsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-031973
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.085s (2.085s including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-ljrh9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:39 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gk92d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gk92d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-ljrh9 to functional-031973
	  Normal   Pulling    6m48s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m45s (x5 over 9m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m45s (x5 over 9m59s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m50s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m38s (x21 over 9m58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-031973/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:17:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-slhv5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-slhv5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-031973
	  Warning  Failed     8m53s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling  7m25s (x5 over 10m)  kubelet  Pulling image "docker.io/nginx"
	  Warning  Failed   7m22s (x5 over 10m)  kubelet  Error: ErrImagePull
	  Warning  Failed   7m22s                kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff  9s (x42 over 10m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Warning  Failed   9s (x42 over 10m)  kubelet  Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wk9xg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-b6pzr" not found

                                                
                                                
** /stderr **
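The two NotFound errors above are expected: the describe command is run without a namespace flag, so it only searches the default namespace, while the dashboard pods reported by the field selector live in the kubernetes-dashboard namespace. A namespace-qualified describe, shown here only as an illustration and not as a harness step, would be:

	# Hypothetical manual follow-up; the harness itself does not pass -n
	kubectl --context functional-031973 -n kubernetes-dashboard describe pod \
	  dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr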
helpers_test.go:287: kubectl --context functional-031973 describe pod busybox-mount mysql-5bb876957f-ljrh9 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wk9xg kubernetes-dashboard-855c9754f9-b6pzr: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.86s)
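The failure itself is environmental: every attempt to pull docker.io/mysql:5.7 (and the nginx and dashboard images above) is refused with 429 Too Many Requests because the node pulls anonymously and hits Docker Hub's unauthenticated rate limit. One possible mitigation, sketched here under the assumption that the image can be fetched once on the CI host (ideally with credentials), is to preload it into the minikube node so the kubelet never has to contact registry-1.docker.io:

	# Sketch only: pull once on the host, then load the image into the cluster's containerd store
	docker pull docker.io/library/mysql:5.7
	minikube -p functional-031973 image load docker.io/library/mysql:5.7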

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-748804 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-748804 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-748804 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-748804 --alsologtostderr -v=1] stderr:
I1202 15:29:25.960438  472339 out.go:360] Setting OutFile to fd 1 ...
I1202 15:29:25.960763  472339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:29:25.960774  472339 out.go:374] Setting ErrFile to fd 2...
I1202 15:29:25.960780  472339 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:29:25.961110  472339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:29:25.961460  472339 mustload.go:66] Loading cluster: functional-748804
I1202 15:29:25.961999  472339 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:29:25.962593  472339 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:29:25.983925  472339 host.go:66] Checking if "functional-748804" exists ...
I1202 15:29:25.984300  472339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:29:26.053027  472339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.041872142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:29:26.053218  472339 api_server.go:166] Checking apiserver status ...
I1202 15:29:26.053280  472339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 15:29:26.053332  472339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:29:26.076625  472339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:29:26.190159  472339 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5653/cgroup
W1202 15:29:26.202478  472339 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5653/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1202 15:29:26.202542  472339 ssh_runner.go:195] Run: ls
I1202 15:29:26.209131  472339 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1202 15:29:26.215948  472339 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1202 15:29:26.216081  472339 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1202 15:29:26.216332  472339 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:29:26.216355  472339 addons.go:70] Setting dashboard=true in profile "functional-748804"
I1202 15:29:26.216374  472339 addons.go:239] Setting addon dashboard=true in "functional-748804"
I1202 15:29:26.216415  472339 host.go:66] Checking if "functional-748804" exists ...
I1202 15:29:26.216964  472339 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:29:26.242334  472339 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1202 15:29:26.243899  472339 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1202 15:29:26.245639  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1202 15:29:26.245681  472339 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1202 15:29:26.245766  472339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:29:26.267783  472339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:29:26.384341  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1202 15:29:26.384374  472339 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1202 15:29:26.400102  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1202 15:29:26.400124  472339 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1202 15:29:26.415261  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1202 15:29:26.415289  472339 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1202 15:29:26.431004  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1202 15:29:26.431033  472339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1202 15:29:26.449381  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1202 15:29:26.449413  472339 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1202 15:29:26.468162  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1202 15:29:26.468208  472339 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1202 15:29:26.483678  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1202 15:29:26.483704  472339 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1202 15:29:26.498196  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1202 15:29:26.498226  472339 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1202 15:29:26.516506  472339 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:29:26.516532  472339 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1202 15:29:26.531456  472339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:29:27.035177  472339 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-748804 addons enable metrics-server

                                                
                                                
I1202 15:29:27.037071  472339 addons.go:202] Writing out "functional-748804" config to set dashboard=true...
W1202 15:29:27.037371  472339 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1202 15:29:27.038119  472339 kapi.go:59] client config for functional-748804: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.key", CAFile:"/home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1202 15:29:27.038787  472339 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1202 15:29:27.038808  472339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1202 15:29:27.038816  472339 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1202 15:29:27.038822  472339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1202 15:29:27.038838  472339 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1202 15:29:27.047766  472339 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  e68af139-ebed-49a9-8fc9-6fb8791cfa5b 685 0 2025-12-02 15:29:26 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-02 15:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.101.234.80,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.101.234.80],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1202 15:29:27.047915  472339 out.go:285] * Launching proxy ...
* Launching proxy ...
I1202 15:29:27.048030  472339 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-748804 proxy --port 36195]
I1202 15:29:27.048354  472339 dashboard.go:159] Waiting for kubectl to output host:port ...
I1202 15:29:27.097554  472339 out.go:203] 
W1202 15:29:27.098920  472339 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1202 15:29:27.098944  472339 out.go:285] * 
* 
W1202 15:29:27.103151  472339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1202 15:29:27.104902  472339 out.go:203] 
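The dashboard addon itself applied cleanly (the kubectl apply at 15:29:26 succeeded and the kubernetes-dashboard Service was found); the test fails because the kubectl proxy child process exits before printing its host:port line, which minikube reports as readByteWithTimeout: EOF. Re-running the same proxy command by hand outside the harness usually surfaces the underlying error, for example a port that is already bound; the commands below are illustrative and not part of the test:

	# Check whether anything already listens on the requested proxy port, then run the proxy the test launched
	ss -ltnp | grep 36195 || true
	kubectl --context functional-748804 proxy --port 36195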
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-748804
helpers_test.go:243: (dbg) docker inspect functional-748804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	        "Created": "2025-12-02T15:27:45.585626306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:27:45.624629841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hosts",
	        "LogPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac-json.log",
	        "Name": "/functional-748804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-748804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-748804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	                "LowerDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-748804",
	                "Source": "/var/lib/docker/volumes/functional-748804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-748804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-748804",
	                "name.minikube.sigs.k8s.io": "functional-748804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef0274eff8a62ef04c066c3fb70728d1f398f09a7f7467a4dc6d4783d563c894",
	            "SandboxKey": "/var/run/docker/netns/ef0274eff8a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-748804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45059eff2ea7711cb8d23ef3b916e84c51ac669d185a5efdcdc6c56158ffc5eb",
	                    "EndpointID": "3dc2c96d7473c5c48b900489a532766ff181ebe724b42e6cc7869e9199bcb6a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:b3:b6:48:c7:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-748804",
	                        "6222e78e6421"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
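The docker inspect output above is what the harness parses to reach the node; the cli_runner call earlier in this log extracts the published host port for 22/tcp (33170) to open its SSH session. The same Go-template lookup works for any published port, for example the apiserver port 8441/tcp, shown here purely as an illustration of the template rather than as a harness step:

	# Resolve the host port mapped to the container's apiserver port (expected: 33173 per the inspect output above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-748804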
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-748804 -n functional-748804
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs -n 25: (1.477275687s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl   │ functional-748804 kubectl -- --context functional-748804 get pods                                                                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:28 UTC │ 02 Dec 25 15:28 UTC │
	│ start     │ -p functional-748804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:28 UTC │ 02 Dec 25 15:29 UTC │
	│ service   │ invalid-svc -p functional-748804                                                                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ config    │ functional-748804 config unset cpus                                                                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ cp        │ functional-748804 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ config    │ functional-748804 config get cpus                                                                                                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ config    │ functional-748804 config set cpus 2                                                                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ config    │ functional-748804 config get cpus                                                                                                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ config    │ functional-748804 config unset cpus                                                                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh       │ functional-748804 ssh -n functional-748804 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ config    │ functional-748804 config get cpus                                                                                                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ mount     │ -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001:/mount-9p --alsologtostderr -v=1                        │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh       │ functional-748804 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ cp        │ functional-748804 cp functional-748804:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1305142170/001/cp-test.txt │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh       │ functional-748804 ssh -n functional-748804 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ cp        │ functional-748804 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh       │ functional-748804 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh       │ functional-748804 ssh -- ls -la /mount-9p                                                                                                                    │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh       │ functional-748804 ssh cat /mount-9p/test-1764689363706660173                                                                                                 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ start     │ -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ start     │ -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-748804 --alsologtostderr -v=1                                                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ start     │ -p functional-748804 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                    │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh       │ functional-748804 ssh sudo systemctl is-active docker                                                                                                        │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh       │ functional-748804 ssh sudo systemctl is-active crio                                                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:29:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:29:26.139222  472486 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:26.139339  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139346  472486 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:26.139351  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139620  472486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:26.140110  472486 out.go:368] Setting JSON to false
	I1202 15:29:26.141235  472486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:26.141317  472486 start.go:143] virtualization: kvm guest
	I1202 15:29:26.143611  472486 out.go:179] * [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:26.145431  472486 notify.go:221] Checking for updates...
	I1202 15:29:26.145475  472486 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:26.147028  472486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:26.148571  472486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:26.151226  472486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:26.152690  472486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:26.153938  472486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:26.155740  472486 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:26.156423  472486 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:26.185109  472486 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:26.185249  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.258186  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.246627139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.258363  472486 docker.go:319] overlay module found
	I1202 15:29:26.260367  472486 out.go:179] * Using the docker driver based on existing profile
	I1202 15:29:26.262125  472486 start.go:309] selected driver: docker
	I1202 15:29:26.262148  472486 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.262256  472486 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:26.262347  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.323640  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.3130954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.324500  472486 cni.go:84] Creating CNI manager for ""
	I1202 15:29:26.324578  472486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:29:26.324624  472486 start.go:353] cluster config:
	{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.327384  472486 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c26834edb0b97       9056ab77afb8e       2 seconds ago        Running             echo-server               0                   883b5dde25cab       hello-node-5758569b79-n9fw4                 default
	8c7bed2b3aa74       6e38f40d628db       24 seconds ago       Running             storage-provisioner       2                   7c72eb9f1b949       storage-provisioner                         kube-system
	3b2e144de157e       aa9d02839d8de       25 seconds ago       Running             kube-apiserver            0                   6c5ffab4d9199       kube-apiserver-functional-748804            kube-system
	5fd6717e2dd43       45f3cc72d235f       26 seconds ago       Running             kube-controller-manager   2                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	133eb1abf8f5a       a3e246e9556e9       26 seconds ago       Running             etcd                      1                   71bf699e9dad7       etcd-functional-748804                      kube-system
	2b69351c8a77e       8a4ded35a3eb1       37 seconds ago       Running             kube-proxy                1                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	c61260d974192       45f3cc72d235f       37 seconds ago       Exited              kube-controller-manager   1                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	a119c7112144b       7bb6219ddab95       37 seconds ago       Running             kube-scheduler            1                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	3e94dbe1375f9       6e38f40d628db       37 seconds ago       Exited              storage-provisioner       1                   7c72eb9f1b949       storage-provisioner                         kube-system
	467501febc847       aa5e3ebc0dfed       37 seconds ago       Running             coredns                   1                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	511bcd0a11b99       409467f978b4a       37 seconds ago       Running             kindnet-cni               1                   f5df6f632b2da       kindnet-mr459                               kube-system
	b35dbd9505dcb       aa5e3ebc0dfed       About a minute ago   Exited              coredns                   0                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	71dd3db26040e       409467f978b4a       About a minute ago   Exited              kindnet-cni               0                   f5df6f632b2da       kindnet-mr459                               kube-system
	cbf183f96ad45       8a4ded35a3eb1       About a minute ago   Exited              kube-proxy                0                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	180ffdb115bcf       7bb6219ddab95       About a minute ago   Exited              kube-scheduler            0                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	0fd3f9a9df703       a3e246e9556e9       About a minute ago   Exited              etcd                      0                   71bf699e9dad7       etcd-functional-748804                      kube-system
	
	
	==> containerd <==
	Dec 02 15:29:25 functional-748804 containerd[4479]: time="2025-12-02T15:29:25.584544720Z" level=info msg="StartContainer for \"c26834edb0b9752f911175180cbacd17d8dfa11f2d2bb3f17a7a6d3b93557924\""
	Dec 02 15:29:25 functional-748804 containerd[4479]: time="2025-12-02T15:29:25.585631997Z" level=info msg="connecting to shim c26834edb0b9752f911175180cbacd17d8dfa11f2d2bb3f17a7a6d3b93557924" address="unix:///run/containerd/s/f95003d5703e83da78676f6a91712f7a82ee26dee0349eb26c907baa8e1b99cb" protocol=ttrpc version=3
	Dec 02 15:29:25 functional-748804 containerd[4479]: time="2025-12-02T15:29:25.640923261Z" level=info msg="StartContainer for \"c26834edb0b9752f911175180cbacd17d8dfa11f2d2bb3f17a7a6d3b93557924\" returns successfully"
	Dec 02 15:29:26 functional-748804 containerd[4479]: time="2025-12-02T15:29:26.079636569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:861ada58-f518-4422-acaf-49337872cb3c,Namespace:default,Attempt:0,}"
	Dec 02 15:29:26 functional-748804 containerd[4479]: time="2025-12-02T15:29:26.118558248Z" level=info msg="connecting to shim 8ef8320ca87483d9b7bb5eb66029463fbfe83e56fac1ed2ede7f785279b2b16e" address="unix:///run/containerd/s/bfaf823ea6a62a7fc3762b06e439e5bf35cf3985b3c72ef63b99e7099c7c07e6" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 15:29:26 functional-748804 containerd[4479]: time="2025-12-02T15:29:26.200071523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-mount,Uid:861ada58-f518-4422-acaf-49337872cb3c,Namespace:default,Attempt:0,} returns sandbox id \"8ef8320ca87483d9b7bb5eb66029463fbfe83e56fac1ed2ede7f785279b2b16e\""
	Dec 02 15:29:26 functional-748804 containerd[4479]: time="2025-12-02T15:29:26.202810162Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.297404155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-b84665fb8-qvfrp,Uid:1f9cac4a-1ab2-413d-9967-a023719d8122,Namespace:kubernetes-dashboard,Attempt:0,}"
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.329402921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5565989548-8p6fj,Uid:ad96cdb8-fb98-4e24-9510-f4c483deb625,Namespace:kubernetes-dashboard,Attempt:0,}"
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.343086832Z" level=info msg="connecting to shim 07c35019d4dd28391814f21608336a306d8e05e7b7aee67f8abb8b7c5c3dee97" address="unix:///run/containerd/s/75058637e6ec7e7e377ec23fc5fed4bd5fb15aa5d8ca55db6882ec7d2a228270" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.379504719Z" level=info msg="connecting to shim 650ca130452c5787932a3160b11b960790e578cd21c00aa674c1ca272ebdac3f" address="unix:///run/containerd/s/c5ab6a97ccc39393dae175afd6baf70cb5bbac2be6d1161cde6390fd02f56f93" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.416587449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-b84665fb8-qvfrp,Uid:1f9cac4a-1ab2-413d-9967-a023719d8122,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"07c35019d4dd28391814f21608336a306d8e05e7b7aee67f8abb8b7c5c3dee97\""
	Dec 02 15:29:27 functional-748804 containerd[4479]: time="2025-12-02T15:29:27.453560958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5565989548-8p6fj,Uid:ad96cdb8-fb98-4e24-9510-f4c483deb625,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"650ca130452c5787932a3160b11b960790e578cd21c00aa674c1ca272ebdac3f\""
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.261897463Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.263176464Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.264612242Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.266714382Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.267277142Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.064276264s"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.267323605Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.268876659Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.272774337Z" level=info msg="CreateContainer within sandbox \"8ef8320ca87483d9b7bb5eb66029463fbfe83e56fac1ed2ede7f785279b2b16e\" for container &ContainerMetadata{Name:mount-munger,Attempt:0,}"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.282947258Z" level=info msg="Container 8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f: CDI devices from CRI Config.CDIDevices: []"
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.291114542Z" level=info msg="CreateContainer within sandbox \"8ef8320ca87483d9b7bb5eb66029463fbfe83e56fac1ed2ede7f785279b2b16e\" for &ContainerMetadata{Name:mount-munger,Attempt:0,} returns container id \"8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f\""
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.291863804Z" level=info msg="StartContainer for \"8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f\""
	Dec 02 15:29:28 functional-748804 containerd[4479]: time="2025-12-02T15:29:28.293086081Z" level=info msg="connecting to shim 8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f" address="unix:///run/containerd/s/bfaf823ea6a62a7fc3762b06e439e5bf35cf3985b3c72ef63b99e7099c7c07e6" protocol=ttrpc version=3
	
	
	==> coredns [467501febc847e467dc2b1bb7632a8f5e694cd7b2bfb6697262857352e0c72ca] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59950 - 10790 "HINFO IN 5936427929830372433.4033215274638678948. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027399913s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [b35dbd9505dcbba128c22f8c3b17f1dedfc5404d131acbfc9c2360bae30ebdd4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33602 - 24598 "HINFO IN 8541481598738272659.1752149788769955562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031231862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-748804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-748804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-748804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_28_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-748804
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:29:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:29:03 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:29:03 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:29:03 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:29:03 +0000   Tue, 02 Dec 2025 15:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-748804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4f8df55b-decd-494b-acfb-0d7449c62078
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     hello-node-5758569b79-n9fw4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-7d764666f9-hbkc9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     78s
	  kube-system                 etcd-functional-748804                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         84s
	  kube-system                 kindnet-mr459                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-functional-748804              250m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-functional-748804     200m (2%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-lcgn8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-functional-748804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8p6fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qvfrp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  80s   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	  Normal  RegisteredNode  22s   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[ +13.571564] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 96 e2 dd 40 21 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff c2 f5 ca ac 67 17 08 06
	[  +2.699615] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[Dec 2 14:52] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 3c f9 c8 55 0b 08 06
	[  +0.118748] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	[  +0.856727] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +14.974602] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa c3 c5 ff a1 a9 08 06
	[  +0.000340] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 77 c0 d8 ea 13 08 06
	[  +2.666742] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 5e 20 e4 1d 98 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 fb 9f 63 58 4b 08 06
	[ +24.223711] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 09 24 19 b9 42 08 06
	[  +0.000349] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 d8 9f 3f ef 99 08 06
	
	
	==> etcd [0fd3f9a9df703f05808ad0be200c0376f9990b42a6e6db124573d8d8aea41d62] <==
	{"level":"warn","ts":"2025-12-02T15:28:01.705301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.712264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.719006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.762879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:03.766021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.359771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:28:03.766112Z","caller":"traceutil/trace.go:172","msg":"trace[687264165] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:207; }","duration":"107.4911ms","start":"2025-12-02T15:28:03.658606Z","end":"2025-12-02T15:28:03.766097Z","steps":["trace[687264165] 'agreement among raft nodes before linearized reading'  (duration: 47.402237ms)","trace[687264165] 'range keys from in-memory index tree'  (duration: 59.913776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:28:03.766198Z","caller":"traceutil/trace.go:172","msg":"trace[1479526730] transaction","detail":"{read_only:false; response_revision:208; number_of_response:1; }","duration":"108.02388ms","start":"2025-12-02T15:28:03.658152Z","end":"2025-12-02T15:28:03.766176Z","steps":["trace[1479526730] 'process raft request'  (duration: 47.910328ms)","trace[1479526730] 'compare'  (duration: 59.848277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:29:00.675066Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:29:00.675150Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:29:00.675262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676839Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.676881Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677017Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677030Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677045Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677060Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679245Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:29:00.679302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679344Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:29:00.679377Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [133eb1abf8f5a0e3e8ce65d6e6ebf24893cd038f129c8a429e6545b040014e17] <==
	{"level":"warn","ts":"2025-12-02T15:29:02.967009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.974297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.984117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.992743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.006763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.015100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.022088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.034320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.041243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.048661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.055653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.062574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.070913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.077381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.084291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.091309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.098135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.104641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.112398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.119294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.135772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.144860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.152024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.158695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.212894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:29:28 up  2:11,  0 user,  load average: 0.77, 0.51, 0.67
	Linux functional-748804 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511bcd0a11b99f5dc7b64a5fdb7f3344c73d85de581a14e9bb569220007ce972] <==
	I1202 15:28:51.446906       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:28:51.447573       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1202 15:28:51.448120       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 15:28:51.448166       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 15:28:51.503049       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 15:28:51.603108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 15:28:52.319996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 15:28:52.366416       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 15:28:52.486843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 15:28:52.661617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 15:28:54.378303       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 15:28:54.833562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1202 15:28:54.921937       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 15:28:55.182023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 15:28:57.590709       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1202 15:28:59.719866       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1202 15:29:00.525721       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1202 15:29:00.918682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1202 15:29:10.748030       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:29:10.748072       1 metrics.go:72] Registering metrics
	I1202 15:29:10.748165       1 controller.go:711] "Syncing nftables rules"
	I1202 15:29:11.446957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:29:11.447069       1 main.go:301] handling current node
	I1202 15:29:21.447592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:29:21.447632       1 main.go:301] handling current node
	
	
	==> kindnet [71dd3db26040e5b2ca6139b6c4624cc876a85ee5da6a3af870c7bf7350b68965] <==
	I1202 15:28:13.964841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:28:13.965136       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:28:13.965321       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:28:13.965340       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:28:13.965372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:28:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:28:14.167511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:28:14.167537       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:28:14.167547       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:28:14.262349       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:28:14.667734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:28:14.667761       1 metrics.go:72] Registering metrics
	I1202 15:28:14.667822       1 controller.go:711] "Syncing nftables rules"
	I1202 15:28:24.167082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:24.167201       1 main.go:301] handling current node
	I1202 15:28:34.174488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:34.174527       1 main.go:301] handling current node
	I1202 15:28:44.170814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:44.170903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3b2e144de157e70de00d1b1ca9af127bb60c21bb6d622d6dbb8ac9301905bfde] <==
	I1202 15:29:03.672860       1 aggregator.go:187] initial CRD sync complete...
	I1202 15:29:03.672873       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 15:29:03.672881       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 15:29:03.672888       1 cache.go:39] Caches are synced for autoregister controller
	I1202 15:29:03.673105       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:03.673180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 15:29:03.673201       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 15:29:03.675016       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:29:03.677720       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:29:03.701547       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:29:03.805605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:29:04.577690       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 15:29:04.883473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:29:04.885014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:29:04.891213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:29:05.609614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:29:05.721038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:29:05.788248       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:29:05.795890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:29:19.113146       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.196.24"}
	I1202 15:29:23.230433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:29:23.341846       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.140.74"}
	I1202 15:29:26.872818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:29:26.999224       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.234.80"}
	I1202 15:29:27.026547       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.234.122"}
	
	
	==> kube-controller-manager [5fd6717e2dd4392afbf6f9c8c694a9fb1e9b933d75da91e35b2a86060be7f451] <==
	I1202 15:29:06.800081       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800122       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799219       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800334       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800435       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800465       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800507       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799366       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 15:29:06.800559       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800812       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800871       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.802642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.804077       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.812453       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.900394       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.900417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:29:06.900422       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:29:06.912983       1 shared_informer.go:377] "Caches are synced"
	E1202 15:29:26.926732       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.931750       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.935883       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.937394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.941939       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946876       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946893       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c61260d97419239c41be460f25d083080904cbb7018b647cf4294cbdcf2470b3] <==
	I1202 15:28:51.276093       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:28:51.283278       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:28:51.283306       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.285100       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:28:51.285154       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:28:51.285318       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:28:51.285367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:29:01.287242       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [2b69351c8a77e410a053982397818fcb33f606f013f50c531e760cd0be7136f5] <==
	I1202 15:28:51.085789       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:51.158084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.858497       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.858541       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:29:06.858650       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:29:06.882408       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:29:06.882468       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:29:06.888994       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:29:06.889421       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:29:06.889456       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:29:06.891000       1 config.go:200] "Starting service config controller"
	I1202 15:29:06.891018       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:29:06.891032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:29:06.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:29:06.891032       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:29:06.891050       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:29:06.891089       1 config.go:309] "Starting node config controller"
	I1202 15:29:06.891096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:29:06.991280       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:29:06.991321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:29:06.991348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:29:06.991330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [cbf183f96ad458cca4889f2d8498a70c350f894b4a6caf224691fa74849d8862] <==
	I1202 15:28:10.737057       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:10.814644       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:10.915846       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:10.915896       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:28:10.916036       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:28:10.989739       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:28:10.989815       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:28:10.995599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:28:10.996214       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:28:10.996241       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:10.998971       1 config.go:200] "Starting service config controller"
	I1202 15:28:10.999004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:28:10.999026       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:28:10.999031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:28:10.999046       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:28:10.999050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:28:10.999197       1 config.go:309] "Starting node config controller"
	I1202 15:28:10.999231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:28:10.999244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:28:11.099278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:28:11.099306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:28:11.099314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [180ffdb115bcf7d869265aab1df2cb6f33f07745c05d1b65de6c107ce8e2de1a] <==
	E1202 15:28:03.202206       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 15:28:03.203345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 15:28:03.217452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.218466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 15:28:03.227657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.228764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 15:28:03.251113       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.252303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 15:28:03.317017       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.318198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 15:28:03.352431       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 15:28:03.353484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 15:28:03.468006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 15:28:03.469092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 15:28:03.481689       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.482819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 15:28:03.494989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 15:28:03.495927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1202 15:28:05.674207       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:50.457138       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 15:28:50.457038       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:50.457268       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:28:50.457302       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:28:50.457310       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:28:50.457335       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a119c7112144b4151b97909036d18ec5009dba3db288ef99bc67561e90c3c78a] <==
	I1202 15:28:51.290257       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:28:51.292769       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:28:51.292799       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:28:51.292808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:28:51.300270       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:28:51.300302       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.302116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:51.302153       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:51.302193       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:28:51.302473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:29:11.803121       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:29:12 functional-748804 kubelet[5467]: E1202 15:29:12.777420    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-748804" containerName="kube-scheduler"
	Dec 02 15:29:13 functional-748804 kubelet[5467]: E1202 15:29:13.233064    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-748804" containerName="kube-apiserver"
	Dec 02 15:29:13 functional-748804 kubelet[5467]: E1202 15:29:13.600752    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-748804" containerName="kube-controller-manager"
	Dec 02 15:29:13 functional-748804 kubelet[5467]: E1202 15:29:13.804098    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-748804" containerName="kube-apiserver"
	Dec 02 15:29:14 functional-748804 kubelet[5467]: E1202 15:29:14.599942    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-748804" containerName="etcd"
	Dec 02 15:29:14 functional-748804 kubelet[5467]: E1202 15:29:14.806904    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-748804" containerName="etcd"
	Dec 02 15:29:17 functional-748804 kubelet[5467]: E1202 15:29:17.256280    5467 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-hbkc9" containerName="coredns"
	Dec 02 15:29:19 functional-748804 kubelet[5467]: I1202 15:29:19.205298    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvzg\" (UniqueName: \"kubernetes.io/projected/347dacd5-9c22-4537-b91d-a15a183f5981-kube-api-access-2qvzg\") pod \"invalid-svc\" (UID: \"347dacd5-9c22-4537-b91d-a15a183f5981\") " pod="default/invalid-svc"
	Dec 02 15:29:20 functional-748804 kubelet[5467]: E1202 15:29:20.127274    5467 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
	Dec 02 15:29:20 functional-748804 kubelet[5467]: E1202 15:29:20.127375    5467 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
	Dec 02 15:29:20 functional-748804 kubelet[5467]: E1202 15:29:20.127748    5467 kuberuntime_manager.go:1664] "Unhandled Error" err="container nginx start failed in pod invalid-svc_default(347dacd5-9c22-4537-b91d-a15a183f5981): ErrImagePull: failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" logger="UnhandledError"
	Dec 02 15:29:20 functional-748804 kubelet[5467]: E1202 15:29:20.127800    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="347dacd5-9c22-4537-b91d-a15a183f5981"
	Dec 02 15:29:20 functional-748804 kubelet[5467]: E1202 15:29:20.823520    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="347dacd5-9c22-4537-b91d-a15a183f5981"
	Dec 02 15:29:22 functional-748804 kubelet[5467]: I1202 15:29:22.829238    5467 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/347dacd5-9c22-4537-b91d-a15a183f5981-kube-api-access-2qvzg\" (UniqueName: \"kubernetes.io/projected/347dacd5-9c22-4537-b91d-a15a183f5981-kube-api-access-2qvzg\") pod \"347dacd5-9c22-4537-b91d-a15a183f5981\" (UID: \"347dacd5-9c22-4537-b91d-a15a183f5981\") "
	Dec 02 15:29:22 functional-748804 kubelet[5467]: I1202 15:29:22.831708    5467 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347dacd5-9c22-4537-b91d-a15a183f5981-kube-api-access-2qvzg" pod "347dacd5-9c22-4537-b91d-a15a183f5981" (UID: "347dacd5-9c22-4537-b91d-a15a183f5981"). InnerVolumeSpecName "kube-api-access-2qvzg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 02 15:29:22 functional-748804 kubelet[5467]: I1202 15:29:22.930156    5467 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2qvzg\" (UniqueName: \"kubernetes.io/projected/347dacd5-9c22-4537-b91d-a15a183f5981-kube-api-access-2qvzg\") on node \"functional-748804\" DevicePath \"\""
	Dec 02 15:29:23 functional-748804 kubelet[5467]: I1202 15:29:23.434044    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7qfd\" (UniqueName: \"kubernetes.io/projected/825e0df2-e770-45a9-8aad-cf0aa2936171-kube-api-access-k7qfd\") pod \"hello-node-5758569b79-n9fw4\" (UID: \"825e0df2-e770-45a9-8aad-cf0aa2936171\") " pod="default/hello-node-5758569b79-n9fw4"
	Dec 02 15:29:23 functional-748804 kubelet[5467]: I1202 15:29:23.731480    5467 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="347dacd5-9c22-4537-b91d-a15a183f5981" path="/var/lib/kubelet/pods/347dacd5-9c22-4537-b91d-a15a183f5981/volumes"
	Dec 02 15:29:25 functional-748804 kubelet[5467]: I1202 15:29:25.951651    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvm2g\" (UniqueName: \"kubernetes.io/projected/861ada58-f518-4422-acaf-49337872cb3c-kube-api-access-jvm2g\") pod \"busybox-mount\" (UID: \"861ada58-f518-4422-acaf-49337872cb3c\") " pod="default/busybox-mount"
	Dec 02 15:29:25 functional-748804 kubelet[5467]: I1202 15:29:25.951788    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/861ada58-f518-4422-acaf-49337872cb3c-test-volume\") pod \"busybox-mount\" (UID: \"861ada58-f518-4422-acaf-49337872cb3c\") " pod="default/busybox-mount"
	Dec 02 15:29:26 functional-748804 kubelet[5467]: I1202 15:29:26.979270    5467 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/hello-node-5758569b79-n9fw4" podStartSLOduration=2.139633552 podStartE2EDuration="3.979243199s" podCreationTimestamp="2025-12-02 15:29:23 +0000 UTC" firstStartedPulling="2025-12-02 15:29:23.723134155 +0000 UTC m=+22.090688462" lastFinishedPulling="2025-12-02 15:29:25.562743816 +0000 UTC m=+23.930298109" observedRunningTime="2025-12-02 15:29:25.85720841 +0000 UTC m=+24.224762724" watchObservedRunningTime="2025-12-02 15:29:26.979243199 +0000 UTC m=+25.346797514"
	Dec 02 15:29:27 functional-748804 kubelet[5467]: I1202 15:29:27.160694    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f9cac4a-1ab2-413d-9967-a023719d8122-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-qvfrp\" (UID: \"1f9cac4a-1ab2-413d-9967-a023719d8122\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp"
	Dec 02 15:29:27 functional-748804 kubelet[5467]: I1202 15:29:27.160842    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8nw\" (UniqueName: \"kubernetes.io/projected/1f9cac4a-1ab2-413d-9967-a023719d8122-kube-api-access-kt8nw\") pod \"kubernetes-dashboard-b84665fb8-qvfrp\" (UID: \"1f9cac4a-1ab2-413d-9967-a023719d8122\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp"
	Dec 02 15:29:27 functional-748804 kubelet[5467]: I1202 15:29:27.160917    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnds\" (UniqueName: \"kubernetes.io/projected/ad96cdb8-fb98-4e24-9510-f4c483deb625-kube-api-access-wmnds\") pod \"dashboard-metrics-scraper-5565989548-8p6fj\" (UID: \"ad96cdb8-fb98-4e24-9510-f4c483deb625\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj"
	Dec 02 15:29:27 functional-748804 kubelet[5467]: I1202 15:29:27.160947    5467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ad96cdb8-fb98-4e24-9510-f4c483deb625-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-8p6fj\" (UID: \"ad96cdb8-fb98-4e24-9510-f4c483deb625\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj"
	
	
	==> storage-provisioner [3e94dbe1375f92abe0a40b96e91788d0ca512d5269e4554d385391ff28bad723] <==
	I1202 15:28:50.986751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 15:28:50.990376       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8c7bed2b3aa7476230e316a6c487c2dd5357a8d502b2c00552b544a3df23db7a] <==
	I1202 15:29:04.085838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 15:29:04.094010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 15:29:04.094049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1202 15:29:04.096592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:07.552776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:11.813087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:15.412247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:18.466292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:21.489456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:21.494359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:29:21.494543       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 15:29:21.494685       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a66fa701-f005-42ae-a047-dce44332b2a6", APIVersion:"v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-748804_97b335ba-275a-4ed2-9551-eb77b3aeb185 became leader
	I1202 15:29:21.494783       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-748804_97b335ba-275a-4ed2-9551-eb77b3aeb185!
	W1202 15:29:21.496994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:21.500634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1202 15:29:21.595851       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-748804_97b335ba-275a-4ed2-9551-eb77b3aeb185!
	W1202 15:29:23.504916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:23.510972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:25.515372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:25.521599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:27.525999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:29:27.530653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
helpers_test.go:269: (dbg) Run:  kubectl --context functional-748804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-748804 describe pod busybox-mount dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-748804 describe pod busybox-mount dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1 (73.807593ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:25 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:29:28 +0000
	      Finished:     Tue, 02 Dec 2025 15:29:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvm2g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jvm2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-748804
	  Normal  Pulling    3s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.065s (2.066s including waiting). Image size: 2395207 bytes.
	  Normal  Created    1s    kubelet            Container created
	  Normal  Started    1s    kubelet            Container started

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-8p6fj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qvfrp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-748804 describe pod busybox-mount dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.69s)
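Note: the controller-manager log above shows the dashboard ReplicaSets being synced before the kubernetes-dashboard ServiceAccount existed ("serviceaccount \"kubernetes-dashboard\" not found"), and the test gave up while the dashboard pods were still non-running. A minimal sketch of how a retry loop around kubectl could confirm the ServiceAccount is present before expecting the dashboard pods to start is shown below; this is illustrative only, not the actual helpers_test.go code, and the 2-minute timeout and 5-second poll interval are assumptions (the profile name comes from the output above).

	// wait_dashboard_sa.go - hypothetical helper, not part of the test suite.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDashboardSA polls until `kubectl get serviceaccount` succeeds,
	// i.e. until the addon has actually created the ServiceAccount that the
	// ReplicaSet controller was complaining about in the logs above.
	func waitForDashboardSA(ctx context.Context, profile string) error {
		ticker := time.NewTicker(5 * time.Second)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, "kubectl", "--context", profile,
				"-n", "kubernetes-dashboard", "get", "serviceaccount", "kubernetes-dashboard")
			if err := cmd.Run(); err == nil {
				return nil // ServiceAccount exists; pods can now be created.
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kubernetes-dashboard ServiceAccount never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDashboardSA(ctx, "functional-748804"); err != nil {
			fmt.Println(err)
		}
	}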

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-748804 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-748804 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-45wbs" [266c903b-b1d9-4f00-bf97-7602875437e3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-02 15:39:37.517296389 +0000 UTC m=+1814.880189889
functional_test.go:1645: (dbg) Run:  kubectl --context functional-748804 describe po hello-node-connect-9f67c86d4-45wbs -n default
functional_test.go:1645: (dbg) kubectl --context functional-748804 describe po hello-node-connect-9f67c86d4-45wbs -n default:
Name:             hello-node-connect-9f67c86d4-45wbs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-748804/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:29:37 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k7vx5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k7vx5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-45wbs to functional-748804
Normal   Pulling    6m53s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 9m53s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   6m50s (x5 over 9m53s)   kubelet  Error: ErrImagePull
Warning  Failed   4m48s (x20 over 9m53s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m34s (x21 over 9m53s)  kubelet  Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-748804 logs hello-node-connect-9f67c86d4-45wbs -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-748804 logs hello-node-connect-9f67c86d4-45wbs -n default: exit status 1 (76.864535ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-45wbs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-748804 logs hello-node-connect-9f67c86d4-45wbs -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
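Note: the 10-minute wait failed because every pull of docker.io/kicbase/echo-server:latest hit Docker Hub's anonymous rate limit (429 Too Many Requests), so the pod stayed in ImagePullBackOff. A minimal sketch of one possible mitigation is shown below: pull the image once on the host (where pulls may be authenticated) and copy it into the node's containerd image store so the kubelet never contacts registry-1.docker.io. This assumes the host itself is not rate-limited; `docker pull` and `minikube image load` are the commands used, and the profile name and image reference are taken from the output above.

	// preload_image.go - hypothetical mitigation sketch, not part of the test suite.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// preloadImage pulls the image on the host and loads it into the minikube
	// node, so the in-cluster pull resolves from the local image store.
	func preloadImage(profile, image string) error {
		for _, args := range [][]string{
			{"docker", "pull", image},
			{"minikube", "-p", profile, "image", "load", image},
		} {
			cmd := exec.Command(args[0], args[1:]...)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v\n%s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := preloadImage("functional-748804", "kicbase/echo-server:latest"); err != nil {
			fmt.Println(err)
		}
	}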
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-748804 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-45wbs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-748804/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:29:37 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k7vx5 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k7vx5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-45wbs to functional-748804
Normal   Pulling    6m53s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m50s (x5 over 9m53s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   6m50s (x5 over 9m53s)   kubelet  Error: ErrImagePull
Warning  Failed   4m48s (x20 over 9m53s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m34s (x21 over 9m53s)  kubelet  Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-748804 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-748804 logs -l app=hello-node-connect: exit status 1 (69.352129ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-45wbs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-748804 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-748804 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.99.226
IPs:                      10.96.99.226
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32159/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
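Note: the service describe above shows an empty Endpoints list, which is the direct consequence of the pod never becoming Ready; any connection to NodePort 32159 can only fail until an endpoint appears. A minimal sketch of the connectivity check this implies is shown below, using the node IP and NodePort taken from the output above; it is illustrative only and would start succeeding once the echo-server pod is Ready.

	// probe_nodeport.go - hypothetical connectivity probe, not part of the test suite.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.49.2:32159/")
		if err != nil {
			// Expected while the Endpoints list is empty.
			fmt.Println("NodePort not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}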
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-748804
helpers_test.go:243: (dbg) docker inspect functional-748804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	        "Created": "2025-12-02T15:27:45.585626306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:27:45.624629841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hosts",
	        "LogPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac-json.log",
	        "Name": "/functional-748804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-748804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-748804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	                "LowerDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-748804",
	                "Source": "/var/lib/docker/volumes/functional-748804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-748804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-748804",
	                "name.minikube.sigs.k8s.io": "functional-748804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef0274eff8a62ef04c066c3fb70728d1f398f09a7f7467a4dc6d4783d563c894",
	            "SandboxKey": "/var/run/docker/netns/ef0274eff8a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-748804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45059eff2ea7711cb8d23ef3b916e84c51ac669d185a5efdcdc6c56158ffc5eb",
	                    "EndpointID": "3dc2c96d7473c5c48b900489a532766ff181ebe724b42e6cc7869e9199bcb6a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:b3:b6:48:c7:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-748804",
	                        "6222e78e6421"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-748804 -n functional-748804
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs -n 25: (1.411178062s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-748804 service hello-node --url                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh cat /etc/hostname                                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh findmnt -T /mount1                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/406799.pem                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh findmnt -T /mount2                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /usr/share/ca-certificates/406799.pem                                   │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ tunnel         │ functional-748804 tunnel --alsologtostderr                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh            │ functional-748804 ssh findmnt -T /mount3                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ mount          │ -p functional-748804 --kill=true                                                                       │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh            │ functional-748804 ssh sudo cat /etc/test/nested/copy/406799/hosts                                      │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/4067992.pem                                              │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons         │ functional-748804 addons list                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons         │ functional-748804 addons list -o json                                                                  │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ image          │ functional-748804 image ls --format short --alsologtostderr                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format yaml --alsologtostderr                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ ssh            │ functional-748804 ssh pgrep buildkitd                                                                  │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │                     │
	│ image          │ functional-748804 image build -t localhost/my-image:functional-748804 testdata/build --alsologtostderr │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format json --alsologtostderr                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format table --alsologtostderr                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:29:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:29:26.139222  472486 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:26.139339  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139346  472486 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:26.139351  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139620  472486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:26.140110  472486 out.go:368] Setting JSON to false
	I1202 15:29:26.141235  472486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:26.141317  472486 start.go:143] virtualization: kvm guest
	I1202 15:29:26.143611  472486 out.go:179] * [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:26.145431  472486 notify.go:221] Checking for updates...
	I1202 15:29:26.145475  472486 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:26.147028  472486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:26.148571  472486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:26.151226  472486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:26.152690  472486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:26.153938  472486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:26.155740  472486 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:26.156423  472486 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:26.185109  472486 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:26.185249  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.258186  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.246627139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.258363  472486 docker.go:319] overlay module found
	I1202 15:29:26.260367  472486 out.go:179] * Using the docker driver based on existing profile
	I1202 15:29:26.262125  472486 start.go:309] selected driver: docker
	I1202 15:29:26.262148  472486 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.262256  472486 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:26.262347  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.323640  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.3130954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.324500  472486 cni.go:84] Creating CNI manager for ""
	I1202 15:29:26.324578  472486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:29:26.324624  472486 start.go:353] cluster config:
	{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.327384  472486 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8e4faaf1af6e0       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   8ef8320ca8748       busybox-mount                               default
	c26834edb0b97       9056ab77afb8e       10 minutes ago      Running             echo-server               0                   883b5dde25cab       hello-node-5758569b79-n9fw4                 default
	8c7bed2b3aa74       6e38f40d628db       10 minutes ago      Running             storage-provisioner       2                   7c72eb9f1b949       storage-provisioner                         kube-system
	3b2e144de157e       aa9d02839d8de       10 minutes ago      Running             kube-apiserver            0                   6c5ffab4d9199       kube-apiserver-functional-748804            kube-system
	5fd6717e2dd43       45f3cc72d235f       10 minutes ago      Running             kube-controller-manager   2                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	133eb1abf8f5a       a3e246e9556e9       10 minutes ago      Running             etcd                      1                   71bf699e9dad7       etcd-functional-748804                      kube-system
	2b69351c8a77e       8a4ded35a3eb1       10 minutes ago      Running             kube-proxy                1                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	c61260d974192       45f3cc72d235f       10 minutes ago      Exited              kube-controller-manager   1                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	a119c7112144b       7bb6219ddab95       10 minutes ago      Running             kube-scheduler            1                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	3e94dbe1375f9       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       1                   7c72eb9f1b949       storage-provisioner                         kube-system
	467501febc847       aa5e3ebc0dfed       10 minutes ago      Running             coredns                   1                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	511bcd0a11b99       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   f5df6f632b2da       kindnet-mr459                               kube-system
	b35dbd9505dcb       aa5e3ebc0dfed       11 minutes ago      Exited              coredns                   0                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	71dd3db26040e       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   f5df6f632b2da       kindnet-mr459                               kube-system
	cbf183f96ad45       8a4ded35a3eb1       11 minutes ago      Exited              kube-proxy                0                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	180ffdb115bcf       7bb6219ddab95       11 minutes ago      Exited              kube-scheduler            0                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	0fd3f9a9df703       a3e246e9556e9       11 minutes ago      Exited              etcd                      0                   71bf699e9dad7       etcd-functional-748804                      kube-system
	
	
	==> containerd <==
	Dec 02 15:35:18 functional-748804 containerd[4479]: time="2025-12-02T15:35:18.728762617Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355916682Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355945565Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.356807092Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.599983941Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.600003111Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Dec 02 15:35:30 functional-748804 containerd[4479]: time="2025-12-02T15:35:30.728721002Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973088439Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973086888Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.974151515Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213521325Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213539233Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.214567227Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 02 15:35:37 functional-748804 containerd[4479]: time="2025-12-02T15:35:37.457420715Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:37 functional-748804 containerd[4479]: time="2025-12-02T15:35:37.457499756Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.206853059Z" level=info msg="connecting to shim 9zoe6tth1yp0t0jjino2h5pwz" address="unix:///run/containerd/s/d3f28d56c80d12df3256392c31a09bf3a1a744181e49de49dbf5a689e0fd2e0f" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291917036Z" level=info msg="shim disconnected" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291966250Z" level=info msg="cleaning up after shim disconnected" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291981767Z" level=info msg="cleaning up dead shim" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.461605762Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-748804\""
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.467054894Z" level=info msg="ImageCreate event name:\"sha256:05aaff044f3194fc9099fd22d049f4cd103d443abb7f9a4cbe8848f801a0682e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.467560016Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-748804\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:35:50 functional-748804 containerd[4479]: time="2025-12-02T15:35:50.730110324Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Dec 02 15:35:52 functional-748804 containerd[4479]: time="2025-12-02T15:35:52.974412620Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:52 functional-748804 containerd[4479]: time="2025-12-02T15:35:52.974450185Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10966"
	
	
	==> coredns [467501febc847e467dc2b1bb7632a8f5e694cd7b2bfb6697262857352e0c72ca] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59950 - 10790 "HINFO IN 5936427929830372433.4033215274638678948. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027399913s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [b35dbd9505dcbba128c22f8c3b17f1dedfc5404d131acbfc9c2360bae30ebdd4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33602 - 24598 "HINFO IN 8541481598738272659.1752149788769955562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031231862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-748804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-748804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-748804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_28_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-748804
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:39:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-748804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4f8df55b-decd-494b-acfb-0d7449c62078
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-n9fw4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-45wbs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-tcjzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-hbkc9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-748804                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-mr459                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-748804              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-748804     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lcgn8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-748804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8p6fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qvfrp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047826] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[Dec 2 15:35] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +8.063519] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[ +12.324769] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.050503] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000024] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023897] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023943] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000020] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023953] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023878] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047890] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +4.031799] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +8.191554] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	
	
	==> etcd [0fd3f9a9df703f05808ad0be200c0376f9990b42a6e6db124573d8d8aea41d62] <==
	{"level":"warn","ts":"2025-12-02T15:28:01.705301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.712264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.719006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.762879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:03.766021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.359771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:28:03.766112Z","caller":"traceutil/trace.go:172","msg":"trace[687264165] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:207; }","duration":"107.4911ms","start":"2025-12-02T15:28:03.658606Z","end":"2025-12-02T15:28:03.766097Z","steps":["trace[687264165] 'agreement among raft nodes before linearized reading'  (duration: 47.402237ms)","trace[687264165] 'range keys from in-memory index tree'  (duration: 59.913776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:28:03.766198Z","caller":"traceutil/trace.go:172","msg":"trace[1479526730] transaction","detail":"{read_only:false; response_revision:208; number_of_response:1; }","duration":"108.02388ms","start":"2025-12-02T15:28:03.658152Z","end":"2025-12-02T15:28:03.766176Z","steps":["trace[1479526730] 'process raft request'  (duration: 47.910328ms)","trace[1479526730] 'compare'  (duration: 59.848277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:29:00.675066Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:29:00.675150Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:29:00.675262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676839Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.676881Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677017Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677030Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677045Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677060Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679245Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:29:00.679302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679344Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:29:00.679377Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [133eb1abf8f5a0e3e8ce65d6e6ebf24893cd038f129c8a429e6545b040014e17] <==
	{"level":"warn","ts":"2025-12-02T15:29:02.992743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.006763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.015100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.022088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.034320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.041243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.048661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.055653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.062574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.070913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.077381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.084291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.091309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.098135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.104641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.112398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.119294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.135772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.144860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.152024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.158695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.212894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:39:02.692233Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1329}
	{"level":"info","ts":"2025-12-02T15:39:02.713724Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1329,"took":"21.143141ms","hash":278410728,"current-db-size-bytes":3932160,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2019328,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-02T15:39:02.713781Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":278410728,"revision":1329,"compact-revision":-1}
	
	
	==> kernel <==
	 15:39:39 up  2:22,  0 user,  load average: 0.11, 0.17, 0.40
	Linux functional-748804 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511bcd0a11b99f5dc7b64a5fdb7f3344c73d85de581a14e9bb569220007ce972] <==
	I1202 15:37:31.447877       1 main.go:301] handling current node
	I1202 15:37:41.447047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:37:41.447107       1 main.go:301] handling current node
	I1202 15:37:51.447501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:37:51.447540       1 main.go:301] handling current node
	I1202 15:38:01.456392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:01.456440       1 main.go:301] handling current node
	I1202 15:38:11.446943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:11.447001       1 main.go:301] handling current node
	I1202 15:38:21.447523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:21.447590       1 main.go:301] handling current node
	I1202 15:38:31.455705       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:31.455742       1 main.go:301] handling current node
	I1202 15:38:41.447416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:41.447449       1 main.go:301] handling current node
	I1202 15:38:51.449578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:51.449623       1 main.go:301] handling current node
	I1202 15:39:01.447648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:01.447715       1 main.go:301] handling current node
	I1202 15:39:11.447774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:11.447814       1 main.go:301] handling current node
	I1202 15:39:21.447632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:21.447697       1 main.go:301] handling current node
	I1202 15:39:31.455732       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:31.455779       1 main.go:301] handling current node
	
	
	==> kindnet [71dd3db26040e5b2ca6139b6c4624cc876a85ee5da6a3af870c7bf7350b68965] <==
	I1202 15:28:13.964841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:28:13.965136       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:28:13.965321       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:28:13.965340       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:28:13.965372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:28:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:28:14.167511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:28:14.167537       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:28:14.167547       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:28:14.262349       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:28:14.667734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:28:14.667761       1 metrics.go:72] Registering metrics
	I1202 15:28:14.667822       1 controller.go:711] "Syncing nftables rules"
	I1202 15:28:24.167082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:24.167201       1 main.go:301] handling current node
	I1202 15:28:34.174488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:34.174527       1 main.go:301] handling current node
	I1202 15:28:44.170814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:44.170903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3b2e144de157e70de00d1b1ca9af127bb60c21bb6d622d6dbb8ac9301905bfde] <==
	I1202 15:29:03.673105       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:03.673180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 15:29:03.673201       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 15:29:03.675016       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:29:03.677720       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:29:03.701547       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:29:03.805605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:29:04.577690       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 15:29:04.883473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:29:04.885014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:29:04.891213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:29:05.609614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:29:05.721038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:29:05.788248       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:29:05.795890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:29:19.113146       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.196.24"}
	I1202 15:29:23.230433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:29:23.341846       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.140.74"}
	I1202 15:29:26.872818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:29:26.999224       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.234.80"}
	I1202 15:29:27.026547       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.234.122"}
	I1202 15:29:35.649127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.249.51"}
	I1202 15:29:36.670994       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.145.72"}
	I1202 15:29:37.147891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.99.226"}
	I1202 15:39:03.598803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5fd6717e2dd4392afbf6f9c8c694a9fb1e9b933d75da91e35b2a86060be7f451] <==
	I1202 15:29:06.800081       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800122       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799219       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800334       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800435       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800465       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800507       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799366       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 15:29:06.800559       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800812       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800871       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.802642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.804077       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.812453       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.900394       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.900417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:29:06.900422       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:29:06.912983       1 shared_informer.go:377] "Caches are synced"
	E1202 15:29:26.926732       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.931750       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.935883       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.937394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.941939       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946876       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946893       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c61260d97419239c41be460f25d083080904cbb7018b647cf4294cbdcf2470b3] <==
	I1202 15:28:51.276093       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:28:51.283278       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:28:51.283306       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.285100       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:28:51.285154       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:28:51.285318       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:28:51.285367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:29:01.287242       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [2b69351c8a77e410a053982397818fcb33f606f013f50c531e760cd0be7136f5] <==
	I1202 15:28:51.085789       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:51.158084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.858497       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.858541       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:29:06.858650       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:29:06.882408       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:29:06.882468       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:29:06.888994       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:29:06.889421       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:29:06.889456       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:29:06.891000       1 config.go:200] "Starting service config controller"
	I1202 15:29:06.891018       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:29:06.891032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:29:06.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:29:06.891032       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:29:06.891050       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:29:06.891089       1 config.go:309] "Starting node config controller"
	I1202 15:29:06.891096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:29:06.991280       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:29:06.991321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:29:06.991348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:29:06.991330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [cbf183f96ad458cca4889f2d8498a70c350f894b4a6caf224691fa74849d8862] <==
	I1202 15:28:10.737057       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:10.814644       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:10.915846       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:10.915896       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:28:10.916036       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:28:10.989739       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:28:10.989815       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:28:10.995599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:28:10.996214       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:28:10.996241       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:10.998971       1 config.go:200] "Starting service config controller"
	I1202 15:28:10.999004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:28:10.999026       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:28:10.999031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:28:10.999046       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:28:10.999050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:28:10.999197       1 config.go:309] "Starting node config controller"
	I1202 15:28:10.999231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:28:10.999244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:28:11.099278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:28:11.099306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:28:11.099314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [180ffdb115bcf7d869265aab1df2cb6f33f07745c05d1b65de6c107ce8e2de1a] <==
	E1202 15:28:03.202206       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 15:28:03.203345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 15:28:03.217452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.218466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 15:28:03.227657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.228764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 15:28:03.251113       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.252303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 15:28:03.317017       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.318198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 15:28:03.352431       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 15:28:03.353484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 15:28:03.468006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 15:28:03.469092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 15:28:03.481689       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.482819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 15:28:03.494989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 15:28:03.495927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1202 15:28:05.674207       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:50.457138       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 15:28:50.457038       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:50.457268       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:28:50.457302       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:28:50.457310       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:28:50.457335       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a119c7112144b4151b97909036d18ec5009dba3db288ef99bc67561e90c3c78a] <==
	I1202 15:28:51.290257       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:28:51.292769       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:28:51.292799       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:28:51.292808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:28:51.300270       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:28:51.300302       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.302116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:51.302153       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:51.302193       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:28:51.302473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:29:11.803121       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:39:02 functional-748804 kubelet[5467]: E1202 15:39:02.729371    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	Dec 02 15:39:03 functional-748804 kubelet[5467]: E1202 15:39:03.728550    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:03 functional-748804 kubelet[5467]: E1202 15:39:03.729042    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:04 functional-748804 kubelet[5467]: E1202 15:39:04.727588    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:04 functional-748804 kubelet[5467]: E1202 15:39:04.728858    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:07 functional-748804 kubelet[5467]: E1202 15:39:07.728388    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:10 functional-748804 kubelet[5467]: E1202 15:39:10.728843    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b290365c-ec25-4d41-9827-3198e9a91a7c"
	Dec 02 15:39:11 functional-748804 kubelet[5467]: E1202 15:39:11.728779    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-748804" containerName="kube-scheduler"
	Dec 02 15:39:15 functional-748804 kubelet[5467]: E1202 15:39:15.729726    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.727911    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-748804" containerName="kube-apiserver"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.728077    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" containerName="kubernetes-dashboard"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.728652    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.729344    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	Dec 02 15:39:18 functional-748804 kubelet[5467]: E1202 15:39:18.728863    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.728002    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.728196    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-748804" containerName="kube-controller-manager"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.729446    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:24 functional-748804 kubelet[5467]: E1202 15:39:24.729703    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b290365c-ec25-4d41-9827-3198e9a91a7c"
	Dec 02 15:39:29 functional-748804 kubelet[5467]: E1202 15:39:29.728887    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.727952    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.728477    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.728762    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.729274    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:32 functional-748804 kubelet[5467]: E1202 15:39:32.728357    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" containerName="kubernetes-dashboard"
	Dec 02 15:39:32 functional-748804 kubelet[5467]: E1202 15:39:32.729696    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	
	
	==> storage-provisioner [3e94dbe1375f92abe0a40b96e91788d0ca512d5269e4554d385391ff28bad723] <==
	I1202 15:28:50.986751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 15:28:50.990376       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8c7bed2b3aa7476230e316a6c487c2dd5357a8d502b2c00552b544a3df23db7a] <==
	W1202 15:39:13.976704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:15.980189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:15.984546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:17.988009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:17.993734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:19.997334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:20.001716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:22.004935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:22.009243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:24.012182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:24.017286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:26.020467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:26.025016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:28.032407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:28.036712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:30.040657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:30.046382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:32.049806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:32.055425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:34.058878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:34.062829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:36.066545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:36.070892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:38.074207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:38.080422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
helpers_test.go:269: (dbg) Run:  kubectl --context functional-748804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1 (102.050404ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:25 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:29:28 +0000
	      Finished:     Tue, 02 Dec 2025 15:29:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvm2g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jvm2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-748804
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.065s (2.066s including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-connect-9f67c86d4-45wbs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:37 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k7vx5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k7vx5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-45wbs to functional-748804
	  Normal   Pulling    6m56s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m53s (x5 over 9m56s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m53s (x5 over 9m56s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m51s (x20 over 9m56s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m37s (x21 over 9m56s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-tcjzr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86lp4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86lp4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-tcjzr to functional-748804
	  Normal   Pulling    7m (x5 over 10m)       kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m58s (x5 over 9m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m58s (x5 over 9m58s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m49s (x20 over 9m58s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m35s (x21 over 9m58s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6ld8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z6ld8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-748804
	  Normal   Pulling    6m43s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m41s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m41s (x5 over 10m)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m58s (x19 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  1s (x40 over 10m)     kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzwzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jzwzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-748804
	  Warning  Failed     8m26s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed  7m3s (x5 over 10m)  kubelet  Error: ErrImagePull
	  Warning  Failed  7m3s                kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   5m (x18 over 10m)     kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m33s (x20 over 10m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Normal   Pulling  4m22s (x6 over 10m)   kubelet  Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-8p6fj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qvfrp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.20s)
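Every pull failure above has the same root cause: registry-1.docker.io answered 429 Too Many Requests, i.e. the Docker Hub anonymous pull rate limit was exhausted on this agent. A minimal way to check the remaining anonymous quota from the affected host (an illustrative check, not part of the test run; assumes curl and jq are available) is Docker's documented ratelimitpreview probe:

    # request an anonymous pull token, then read the RateLimit headers (a HEAD request should not consume quota)
    TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -fsSI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

When ratelimit-remaining reaches 0, unauthenticated pulls fail exactly as in the events above until the window resets.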

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [aee307c5-73b5-4874-8278-afafa800f729] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004547681s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-748804 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-748804 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-748804 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-748804 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5d5bfff0-32f2-46fe-97c6-6e057285303a] Pending
helpers_test.go:352: "sp-pod" [5d5bfff0-32f2-46fe-97c6-6e057285303a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-02 15:35:35.472781842 +0000 UTC m=+1572.835675339
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-748804 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-748804 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-748804/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzwzd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-jzwzd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-748804
  Warning  Failed     4m21s (x4 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed  2m58s (x5 over 5m58s)  kubelet  Error: ErrImagePull
  Warning  Failed  2m58s                  kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   55s (x18 over 5m58s)  kubelet  Error: ImagePullBackOff
  Normal   BackOff  28s (x20 over 5m58s)  kubelet  Back-off pulling image "docker.io/nginx"
  Normal   Pulling  17s (x6 over 6m)      kubelet  Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-748804 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-748804 logs sp-pod -n default: exit status 1 (91.00605ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-748804 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
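The PVC machinery itself is healthy here (the claim binds and sp-pod is scheduled); the test only times out because docker.io/nginx cannot be pulled past the rate limit. One mitigation sometimes used in this situation, sketched here rather than taken from the job, is to pre-load the image into the cluster runtime so kubelet never has to reach Docker Hub:

    # pull on the Jenkins host (which may have credentials or a mirror), then copy into the minikube node's containerd
    docker pull docker.io/library/nginx:latest
    minikube -p functional-748804 image load docker.io/library/nginx:latest

Because the pod spec uses the untagged image "docker.io/nginx" (equivalent to :latest), its imagePullPolicy defaults to Always, so the manifest would also need imagePullPolicy: IfNotPresent (or a pinned tag) for the pre-loaded copy to actually be used.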
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-748804
helpers_test.go:243: (dbg) docker inspect functional-748804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	        "Created": "2025-12-02T15:27:45.585626306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:27:45.624629841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hosts",
	        "LogPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac-json.log",
	        "Name": "/functional-748804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-748804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-748804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	                "LowerDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-748804",
	                "Source": "/var/lib/docker/volumes/functional-748804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-748804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-748804",
	                "name.minikube.sigs.k8s.io": "functional-748804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef0274eff8a62ef04c066c3fb70728d1f398f09a7f7467a4dc6d4783d563c894",
	            "SandboxKey": "/var/run/docker/netns/ef0274eff8a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-748804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45059eff2ea7711cb8d23ef3b916e84c51ac669d185a5efdcdc6c56158ffc5eb",
	                    "EndpointID": "3dc2c96d7473c5c48b900489a532766ff181ebe724b42e6cc7869e9199bcb6a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:b3:b6:48:c7:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-748804",
	                        "6222e78e6421"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
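The full docker inspect dump above is captured for post-mortem; when only a single field is of interest, docker inspect's Go-template format can extract it directly. For example (an illustrative one-liner, not something the harness runs), the host port forwarded to the container's API-server port 8441/tcp:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-748804

which for the container above prints 33173.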
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-748804 -n functional-748804
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs -n 25: (1.462304251s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-748804 image ls                                                                                                           │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ service │ functional-748804 service --namespace=default --https --url hello-node                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ mount   │ -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount1 --alsologtostderr -v=1 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ mount   │ -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount3 --alsologtostderr -v=1 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ mount   │ -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount2 --alsologtostderr -v=1 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh     │ functional-748804 ssh findmnt -T /mount1                                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ image   │ functional-748804 image save --daemon kicbase/echo-server:functional-748804 --alsologtostderr                                        │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ service │ functional-748804 service hello-node --url --format={{.IP}}                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh echo hello                                                                                                     │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ service │ functional-748804 service hello-node --url                                                                                           │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh cat /etc/hostname                                                                                              │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh findmnt -T /mount1                                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh sudo cat /etc/ssl/certs/406799.pem                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh findmnt -T /mount2                                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh sudo cat /usr/share/ca-certificates/406799.pem                                                                 │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ tunnel  │ functional-748804 tunnel --alsologtostderr                                                                                           │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh     │ functional-748804 ssh findmnt -T /mount3                                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh sudo cat /etc/ssl/certs/51391683.0                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ mount   │ -p functional-748804 --kill=true                                                                                                     │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh     │ functional-748804 ssh sudo cat /etc/test/nested/copy/406799/hosts                                                                    │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh sudo cat /etc/ssl/certs/4067992.pem                                                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons  │ functional-748804 addons list                                                                                                        │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons  │ functional-748804 addons list -o json                                                                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh     │ functional-748804 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ image   │ functional-748804 image ls --format short --alsologtostderr                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:29:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:29:26.139222  472486 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:26.139339  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139346  472486 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:26.139351  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139620  472486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:26.140110  472486 out.go:368] Setting JSON to false
	I1202 15:29:26.141235  472486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:26.141317  472486 start.go:143] virtualization: kvm guest
	I1202 15:29:26.143611  472486 out.go:179] * [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:26.145431  472486 notify.go:221] Checking for updates...
	I1202 15:29:26.145475  472486 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:26.147028  472486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:26.148571  472486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:26.151226  472486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:26.152690  472486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:26.153938  472486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:26.155740  472486 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:26.156423  472486 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:26.185109  472486 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:26.185249  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.258186  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.246627139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.258363  472486 docker.go:319] overlay module found
	I1202 15:29:26.260367  472486 out.go:179] * Using the docker driver based on existing profile
	I1202 15:29:26.262125  472486 start.go:309] selected driver: docker
	I1202 15:29:26.262148  472486 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.262256  472486 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:26.262347  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.323640  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.3130954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.324500  472486 cni.go:84] Creating CNI manager for ""
	I1202 15:29:26.324578  472486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:29:26.324624  472486 start.go:353] cluster config:
	{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.327384  472486 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8e4faaf1af6e0       56cc512116c8f       6 minutes ago       Exited              mount-munger              0                   8ef8320ca8748       busybox-mount                               default
	c26834edb0b97       9056ab77afb8e       6 minutes ago       Running             echo-server               0                   883b5dde25cab       hello-node-5758569b79-n9fw4                 default
	8c7bed2b3aa74       6e38f40d628db       6 minutes ago       Running             storage-provisioner       2                   7c72eb9f1b949       storage-provisioner                         kube-system
	3b2e144de157e       aa9d02839d8de       6 minutes ago       Running             kube-apiserver            0                   6c5ffab4d9199       kube-apiserver-functional-748804            kube-system
	5fd6717e2dd43       45f3cc72d235f       6 minutes ago       Running             kube-controller-manager   2                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	133eb1abf8f5a       a3e246e9556e9       6 minutes ago       Running             etcd                      1                   71bf699e9dad7       etcd-functional-748804                      kube-system
	2b69351c8a77e       8a4ded35a3eb1       6 minutes ago       Running             kube-proxy                1                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	c61260d974192       45f3cc72d235f       6 minutes ago       Exited              kube-controller-manager   1                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	a119c7112144b       7bb6219ddab95       6 minutes ago       Running             kube-scheduler            1                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	3e94dbe1375f9       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   7c72eb9f1b949       storage-provisioner                         kube-system
	467501febc847       aa5e3ebc0dfed       6 minutes ago       Running             coredns                   1                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	511bcd0a11b99       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   f5df6f632b2da       kindnet-mr459                               kube-system
	b35dbd9505dcb       aa5e3ebc0dfed       7 minutes ago       Exited              coredns                   0                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	71dd3db26040e       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   f5df6f632b2da       kindnet-mr459                               kube-system
	cbf183f96ad45       8a4ded35a3eb1       7 minutes ago       Exited              kube-proxy                0                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	180ffdb115bcf       7bb6219ddab95       7 minutes ago       Exited              kube-scheduler            0                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	0fd3f9a9df703       a3e246e9556e9       7 minutes ago       Exited              etcd                      0                   71bf699e9dad7       etcd-functional-748804                      kube-system
	
	
	==> containerd <==
	Dec 02 15:34:28 functional-748804 containerd[4479]: time="2025-12-02T15:34:28.432012402Z" level=info msg="container event discarded" container=8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f type=CONTAINER_STOPPED_EVENT
	Dec 02 15:34:29 functional-748804 containerd[4479]: time="2025-12-02T15:34:29.931458231Z" level=info msg="container event discarded" container=8ef8320ca87483d9b7bb5eb66029463fbfe83e56fac1ed2ede7f785279b2b16e type=CONTAINER_STOPPED_EVENT
	Dec 02 15:34:35 functional-748804 containerd[4479]: time="2025-12-02T15:34:35.589639071Z" level=info msg="container event discarded" container=c59c7d93ea64136a84463b2c1ed8bcad23f2861e2dd175e7427777017407a3ed type=CONTAINER_CREATED_EVENT
	Dec 02 15:34:35 functional-748804 containerd[4479]: time="2025-12-02T15:34:35.589765655Z" level=info msg="container event discarded" container=c59c7d93ea64136a84463b2c1ed8bcad23f2861e2dd175e7427777017407a3ed type=CONTAINER_STARTED_EVENT
	Dec 02 15:34:36 functional-748804 containerd[4479]: time="2025-12-02T15:34:36.079452864Z" level=info msg="container event discarded" container=528c3a7e01c96c157959e950dd820008d0e03d72baf0edd2e409d70f389087fb type=CONTAINER_CREATED_EVENT
	Dec 02 15:34:36 functional-748804 containerd[4479]: time="2025-12-02T15:34:36.079533759Z" level=info msg="container event discarded" container=528c3a7e01c96c157959e950dd820008d0e03d72baf0edd2e409d70f389087fb type=CONTAINER_STARTED_EVENT
	Dec 02 15:34:37 functional-748804 containerd[4479]: time="2025-12-02T15:34:37.173824183Z" level=info msg="container event discarded" container=f7af8874b4785979989f1e9230905b68ccad7ee1f8e2497e1a29a9692e974b8e type=CONTAINER_CREATED_EVENT
	Dec 02 15:34:37 functional-748804 containerd[4479]: time="2025-12-02T15:34:37.173881526Z" level=info msg="container event discarded" container=f7af8874b4785979989f1e9230905b68ccad7ee1f8e2497e1a29a9692e974b8e type=CONTAINER_STARTED_EVENT
	Dec 02 15:34:37 functional-748804 containerd[4479]: time="2025-12-02T15:34:37.507369477Z" level=info msg="container event discarded" container=42394714392a0dffed951d759bde952988dc4dca65b2a0d1dae16dda96b3144e type=CONTAINER_CREATED_EVENT
	Dec 02 15:34:37 functional-748804 containerd[4479]: time="2025-12-02T15:34:37.507426095Z" level=info msg="container event discarded" container=42394714392a0dffed951d759bde952988dc4dca65b2a0d1dae16dda96b3144e type=CONTAINER_STARTED_EVENT
	Dec 02 15:35:01 functional-748804 containerd[4479]: time="2025-12-02T15:35:01.746495653Z" level=info msg="container event discarded" container=8d289074880b62ba780bba4b642289c31767b077ba460a26d63d0a31fbcc082a type=CONTAINER_DELETED_EVENT
	Dec 02 15:35:01 functional-748804 containerd[4479]: time="2025-12-02T15:35:01.774286996Z" level=info msg="container event discarded" container=9bb69484289ce8e0db5a69aff6ba887816e76f64632b999d2c5f96d212cb179d type=CONTAINER_DELETED_EVENT
	Dec 02 15:35:18 functional-748804 containerd[4479]: time="2025-12-02T15:35:18.728762617Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355916682Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355945565Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.356807092Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.599983941Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.600003111Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Dec 02 15:35:30 functional-748804 containerd[4479]: time="2025-12-02T15:35:30.728721002Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973088439Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973086888Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.974151515Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213521325Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213539233Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.214567227Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	
	
	==> coredns [467501febc847e467dc2b1bb7632a8f5e694cd7b2bfb6697262857352e0c72ca] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59950 - 10790 "HINFO IN 5936427929830372433.4033215274638678948. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027399913s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> coredns [b35dbd9505dcbba128c22f8c3b17f1dedfc5404d131acbfc9c2360bae30ebdd4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33602 - 24598 "HINFO IN 8541481598738272659.1752149788769955562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031231862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-748804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-748804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-748804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_28_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-748804
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:34:39 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:34:39 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:34:39 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:34:39 +0000   Tue, 02 Dec 2025 15:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-748804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4f8df55b-decd-494b-acfb-0d7449c62078
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-n9fw4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  default                     hello-node-connect-9f67c86d4-45wbs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  default                     mysql-844cf969f6-tcjzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7d764666f9-hbkc9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m26s
	  kube-system                 etcd-functional-748804                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m32s
	  kube-system                 kindnet-mr459                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m26s
	  kube-system                 kube-apiserver-functional-748804              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-functional-748804     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-lcgn8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-functional-748804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8p6fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qvfrp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m28s  node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	  Normal  RegisteredNode  6m30s  node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
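
As a quick consistency check on the figures above: CPU requests of 1450m against the node's 8-CPU (8000m) allocatable come to roughly 18%, and 732Mi of memory requests against ~32093Mi (32863360Ki) allocatable come to roughly 2%, matching the percentages reported in the Allocated resources table.
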
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023963] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047826] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[Dec 2 15:35] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +8.063519] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[ +12.324769] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.050503] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000024] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023897] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023943] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000020] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023953] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023878] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047890] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +4.031799] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	
	
	==> etcd [0fd3f9a9df703f05808ad0be200c0376f9990b42a6e6db124573d8d8aea41d62] <==
	{"level":"warn","ts":"2025-12-02T15:28:01.705301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.712264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.719006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.762879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:03.766021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.359771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:28:03.766112Z","caller":"traceutil/trace.go:172","msg":"trace[687264165] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:207; }","duration":"107.4911ms","start":"2025-12-02T15:28:03.658606Z","end":"2025-12-02T15:28:03.766097Z","steps":["trace[687264165] 'agreement among raft nodes before linearized reading'  (duration: 47.402237ms)","trace[687264165] 'range keys from in-memory index tree'  (duration: 59.913776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:28:03.766198Z","caller":"traceutil/trace.go:172","msg":"trace[1479526730] transaction","detail":"{read_only:false; response_revision:208; number_of_response:1; }","duration":"108.02388ms","start":"2025-12-02T15:28:03.658152Z","end":"2025-12-02T15:28:03.766176Z","steps":["trace[1479526730] 'process raft request'  (duration: 47.910328ms)","trace[1479526730] 'compare'  (duration: 59.848277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:29:00.675066Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:29:00.675150Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:29:00.675262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676839Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.676881Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677017Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677030Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677045Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677060Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679245Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:29:00.679302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679344Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:29:00.679377Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [133eb1abf8f5a0e3e8ce65d6e6ebf24893cd038f129c8a429e6545b040014e17] <==
	{"level":"warn","ts":"2025-12-02T15:29:02.967009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.974297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.984117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:02.992743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.006763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.015100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.022088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.034320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.041243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.048661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.055653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.062574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.070913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.077381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.084291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.091309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.098135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.104641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.112398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.119294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.135772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.144860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.152024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.158695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.212894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:35:37 up  2:17,  0 user,  load average: 0.36, 0.28, 0.50
	Linux functional-748804 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511bcd0a11b99f5dc7b64a5fdb7f3344c73d85de581a14e9bb569220007ce972] <==
	I1202 15:33:31.450750       1 main.go:301] handling current node
	I1202 15:33:41.447589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:33:41.447632       1 main.go:301] handling current node
	I1202 15:33:51.455779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:33:51.455816       1 main.go:301] handling current node
	I1202 15:34:01.451823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:01.451859       1 main.go:301] handling current node
	I1202 15:34:11.447349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:11.447389       1 main.go:301] handling current node
	I1202 15:34:21.450845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:21.450910       1 main.go:301] handling current node
	I1202 15:34:31.451471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:31.451507       1 main.go:301] handling current node
	I1202 15:34:41.447970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:41.448020       1 main.go:301] handling current node
	I1202 15:34:51.456090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:34:51.456133       1 main.go:301] handling current node
	I1202 15:35:01.456350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:35:01.456387       1 main.go:301] handling current node
	I1202 15:35:11.447645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:35:11.447721       1 main.go:301] handling current node
	I1202 15:35:21.447262       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:35:21.447304       1 main.go:301] handling current node
	I1202 15:35:31.456294       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:35:31.456330       1 main.go:301] handling current node
	
	
	==> kindnet [71dd3db26040e5b2ca6139b6c4624cc876a85ee5da6a3af870c7bf7350b68965] <==
	I1202 15:28:13.964841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:28:13.965136       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:28:13.965321       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:28:13.965340       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:28:13.965372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:28:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:28:14.167511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:28:14.167537       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:28:14.167547       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:28:14.262349       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:28:14.667734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:28:14.667761       1 metrics.go:72] Registering metrics
	I1202 15:28:14.667822       1 controller.go:711] "Syncing nftables rules"
	I1202 15:28:24.167082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:24.167201       1 main.go:301] handling current node
	I1202 15:28:34.174488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:34.174527       1 main.go:301] handling current node
	I1202 15:28:44.170814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:44.170903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3b2e144de157e70de00d1b1ca9af127bb60c21bb6d622d6dbb8ac9301905bfde] <==
	I1202 15:29:03.672888       1 cache.go:39] Caches are synced for autoregister controller
	I1202 15:29:03.673105       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:03.673180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 15:29:03.673201       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 15:29:03.675016       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:29:03.677720       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:29:03.701547       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:29:03.805605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:29:04.577690       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 15:29:04.883473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:29:04.885014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:29:04.891213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:29:05.609614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:29:05.721038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:29:05.788248       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:29:05.795890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:29:19.113146       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.196.24"}
	I1202 15:29:23.230433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:29:23.341846       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.140.74"}
	I1202 15:29:26.872818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:29:26.999224       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.234.80"}
	I1202 15:29:27.026547       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.234.122"}
	I1202 15:29:35.649127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.249.51"}
	I1202 15:29:36.670994       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.145.72"}
	I1202 15:29:37.147891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.99.226"}
	
	
	==> kube-controller-manager [5fd6717e2dd4392afbf6f9c8c694a9fb1e9b933d75da91e35b2a86060be7f451] <==
	I1202 15:29:06.800081       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800122       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799219       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800334       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800435       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800465       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800507       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799366       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 15:29:06.800559       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800812       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800871       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.802642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.804077       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.812453       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.900394       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.900417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:29:06.900422       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:29:06.912983       1 shared_informer.go:377] "Caches are synced"
	E1202 15:29:26.926732       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.931750       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.935883       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.937394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.941939       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946876       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946893       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [c61260d97419239c41be460f25d083080904cbb7018b647cf4294cbdcf2470b3] <==
	I1202 15:28:51.276093       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:28:51.283278       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:28:51.283306       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.285100       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:28:51.285154       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:28:51.285318       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:28:51.285367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:29:01.287242       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [2b69351c8a77e410a053982397818fcb33f606f013f50c531e760cd0be7136f5] <==
	I1202 15:28:51.085789       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:51.158084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.858497       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.858541       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:29:06.858650       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:29:06.882408       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:29:06.882468       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:29:06.888994       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:29:06.889421       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:29:06.889456       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:29:06.891000       1 config.go:200] "Starting service config controller"
	I1202 15:29:06.891018       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:29:06.891032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:29:06.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:29:06.891032       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:29:06.891050       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:29:06.891089       1 config.go:309] "Starting node config controller"
	I1202 15:29:06.891096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:29:06.991280       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:29:06.991321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:29:06.991348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:29:06.991330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [cbf183f96ad458cca4889f2d8498a70c350f894b4a6caf224691fa74849d8862] <==
	I1202 15:28:10.737057       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:10.814644       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:10.915846       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:10.915896       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:28:10.916036       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:28:10.989739       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:28:10.989815       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:28:10.995599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:28:10.996214       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:28:10.996241       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:10.998971       1 config.go:200] "Starting service config controller"
	I1202 15:28:10.999004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:28:10.999026       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:28:10.999031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:28:10.999046       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:28:10.999050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:28:10.999197       1 config.go:309] "Starting node config controller"
	I1202 15:28:10.999231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:28:10.999244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:28:11.099278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:28:11.099306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:28:11.099314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [180ffdb115bcf7d869265aab1df2cb6f33f07745c05d1b65de6c107ce8e2de1a] <==
	E1202 15:28:03.202206       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 15:28:03.203345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 15:28:03.217452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.218466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 15:28:03.227657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.228764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 15:28:03.251113       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.252303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 15:28:03.317017       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.318198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 15:28:03.352431       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 15:28:03.353484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 15:28:03.468006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 15:28:03.469092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 15:28:03.481689       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.482819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 15:28:03.494989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 15:28:03.495927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1202 15:28:05.674207       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:50.457138       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 15:28:50.457038       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:50.457268       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:28:50.457302       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:28:50.457310       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:28:50.457335       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a119c7112144b4151b97909036d18ec5009dba3db288ef99bc67561e90c3c78a] <==
	I1202 15:28:51.290257       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:28:51.292769       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:28:51.292799       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:28:51.292808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:28:51.300270       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:28:51.300302       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.302116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:51.302153       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:51.302193       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:28:51.302473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:29:11.803121       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:35:32 functional-748804 kubelet[5467]:  > image="kicbase/echo-server:latest"
	Dec 02 15:35:32 functional-748804 kubelet[5467]: E1202 15:35:32.974034    5467 kuberuntime_manager.go:1664] "Unhandled Error" err=<
	Dec 02 15:35:32 functional-748804 kubelet[5467]:         container echo-server start failed in pod hello-node-connect-9f67c86d4-45wbs_default(266c903b-b1d9-4f00-bf97-7602875437e3): ErrImagePull: failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	Dec 02 15:35:32 functional-748804 kubelet[5467]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:35:32 functional-748804 kubelet[5467]:  > logger="UnhandledError"
	Dec 02 15:35:32 functional-748804 kubelet[5467]: E1202 15:35:32.974089    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:35:34 functional-748804 kubelet[5467]: E1202 15:35:34.727695    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:35:34 functional-748804 kubelet[5467]: E1202 15:35:34.728472    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:35:35 functional-748804 kubelet[5467]: E1202 15:35:35.213976    5467 log.go:32] "PullImage from image service failed" err=<
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:35:35 functional-748804 kubelet[5467]:  > image="docker.io/mysql:5.7"
	Dec 02 15:35:35 functional-748804 kubelet[5467]: E1202 15:35:35.214030    5467 kuberuntime_image.go:43] "Failed to pull image" err=<
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:35:35 functional-748804 kubelet[5467]:  > image="docker.io/mysql:5.7"
	Dec 02 15:35:35 functional-748804 kubelet[5467]: E1202 15:35:35.214401    5467 kuberuntime_manager.go:1664] "Unhandled Error" err=<
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         container mysql start failed in pod mysql-844cf969f6-tcjzr_default(6fdc785a-6a87-41d8-8b53-eea26c8c69a3): ErrImagePull: failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	Dec 02 15:35:35 functional-748804 kubelet[5467]:         toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Dec 02 15:35:35 functional-748804 kubelet[5467]:  > logger="UnhandledError"
	Dec 02 15:35:35 functional-748804 kubelet[5467]: E1202 15:35:35.214459    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:35:36 functional-748804 kubelet[5467]: E1202 15:35:36.728558    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" containerName="kubernetes-dashboard"
	Dec 02 15:35:36 functional-748804 kubelet[5467]: E1202 15:35:36.728835    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-748804" containerName="kube-controller-manager"
	Dec 02 15:35:36 functional-748804 kubelet[5467]: E1202 15:35:36.729750    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b290365c-ec25-4d41-9827-3198e9a91a7c"
	Dec 02 15:35:36 functional-748804 kubelet[5467]: E1202 15:35:36.730566    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
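
The kubelet's ErrImagePull / ImagePullBackOff entries above are downstream of the same 429 rate-limit responses recorded in the containerd log. A hypothetical triage sequence (not part of the recorded run; the pod and namespace names are taken from the entries above) to confirm which workloads are blocked would be:

    # Hypothetical triage, not executed in this run.
    kubectl get pods -A -o wide | grep -E 'ImagePullBackOff|ErrImagePull'
    kubectl -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-qvfrp
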
	
	
	==> storage-provisioner [3e94dbe1375f92abe0a40b96e91788d0ca512d5269e4554d385391ff28bad723] <==
	I1202 15:28:50.986751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 15:28:50.990376       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8c7bed2b3aa7476230e316a6c487c2dd5357a8d502b2c00552b544a3df23db7a] <==
	W1202 15:35:12.981199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:14.985168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:14.991215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:16.994725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:16.999465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:19.002606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:19.008569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:21.011931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:21.016699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:23.020064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:23.025793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:25.029379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:25.033894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:27.037284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:27.041565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:29.045501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:29.051300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:31.054480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:31.058864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:33.062472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:33.067258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:35.070119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:35.074532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:37.078862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:35:37.084064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
helpers_test.go:269: (dbg) Run:  kubectl --context functional-748804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1 (123.589624ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:25 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:29:28 +0000
	      Finished:     Tue, 02 Dec 2025 15:29:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvm2g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jvm2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-748804
	  Normal  Pulling    6m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.065s (2.066s including waiting). Image size: 2395207 bytes.
	  Normal  Created    6m10s  kubelet            Container created
	  Normal  Started    6m10s  kubelet            Container started
	
	
	Name:             hello-node-connect-9f67c86d4-45wbs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:37 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k7vx5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k7vx5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-45wbs to functional-748804
	  Normal   Pulling    2m54s (x5 over 6m1s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m51s (x5 over 5m54s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m51s (x5 over 5m54s)  kubelet  Error: ErrImagePull
	  Warning  Failed   49s (x20 over 5m54s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  35s (x21 over 5m54s)   kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-tcjzr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86lp4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86lp4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/mysql-844cf969f6-tcjzr to functional-748804
	  Normal   Pulling    2m58s (x5 over 6m1s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m56s (x5 over 5m56s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m56s (x5 over 5m56s)  kubelet  Error: ErrImagePull
	  Warning  Failed   47s (x20 over 5m56s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  33s (x21 over 5m56s)   kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6ld8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z6ld8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/nginx-svc to functional-748804
	  Normal   Pulling    2m41s (x5 over 6m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m39s (x5 over 5m58s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   2m39s (x5 over 5m58s)  kubelet  Error: ErrImagePull
	  Warning  Failed   56s (x19 over 5m58s)   kubelet  Error: ImagePullBackOff
	  Normal   BackOff  30s (x21 over 5m58s)   kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzwzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jzwzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-748804
	  Warning  Failed     4m24s (x4 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed  3m1s (x5 over 6m1s)  kubelet  Error: ErrImagePull
	  Warning  Failed  3m1s                 kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   58s (x18 over 6m1s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  31s (x20 over 6m1s)  kubelet  Back-off pulling image "docker.io/nginx"
	  Normal   Pulling  20s (x6 over 6m3s)   kubelet  Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-8p6fj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qvfrp" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (603.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-748804 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-tcjzr" [6fdc785a-6a87-41d8-8b53-eea26c8c69a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1202 15:31:48.211630  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.603557  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.609969  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.621420  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.642904  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.684289  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.765869  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:15.927521  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:16.249628  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:16.892005  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:18.174346  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:20.736646  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:25.858502  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:36.100494  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:32:56.581867  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:33:11.284038  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-02 15:39:37.06262121 +0000 UTC m=+1814.425514697
functional_test.go:1804: (dbg) Run:  kubectl --context functional-748804 describe po mysql-844cf969f6-tcjzr -n default
functional_test.go:1804: (dbg) kubectl --context functional-748804 describe po mysql-844cf969f6-tcjzr -n default:
Name:             mysql-844cf969f6-tcjzr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-748804/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:29:36 +0000
Labels:           app=mysql
pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86lp4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-86lp4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-tcjzr to functional-748804
Normal   Pulling    6m57s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     6m55s (x5 over 9m55s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed   6m55s (x5 over 9m55s)   kubelet  Error: ErrImagePull
Warning  Failed   4m46s (x20 over 9m55s)  kubelet  Error: ImagePullBackOff
Normal   BackOff  4m32s (x21 over 9m55s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-748804 logs mysql-844cf969f6-tcjzr -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-748804 logs mysql-844cf969f6-tcjzr -n default: exit status 1 (83.143098ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-tcjzr" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-748804 logs mysql-844cf969f6-tcjzr -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-748804
helpers_test.go:243: (dbg) docker inspect functional-748804:

-- stdout --
	[
	    {
	        "Id": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	        "Created": "2025-12-02T15:27:45.585626306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:27:45.624629841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/hosts",
	        "LogPath": "/var/lib/docker/containers/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac/6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac-json.log",
	        "Name": "/functional-748804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-748804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-748804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6222e78e642133b36889ac3edf0ea2d5042d373f1833e7703e046f5ed14e24ac",
	                "LowerDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372-init/diff:/var/lib/docker/overlay2/b24a03799b584404f04c044a7327612eb3ab66b1330d1bf57134456e5f41230d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/merged",
	                "UpperDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/diff",
	                "WorkDir": "/var/lib/docker/overlay2/671e9c5d889f651edf5f697f181fc8d047a0da384c7d026f1c7810abd59bc372/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-748804",
	                "Source": "/var/lib/docker/volumes/functional-748804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-748804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-748804",
	                "name.minikube.sigs.k8s.io": "functional-748804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef0274eff8a62ef04c066c3fb70728d1f398f09a7f7467a4dc6d4783d563c894",
	            "SandboxKey": "/var/run/docker/netns/ef0274eff8a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-748804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45059eff2ea7711cb8d23ef3b916e84c51ac669d185a5efdcdc6c56158ffc5eb",
	                    "EndpointID": "3dc2c96d7473c5c48b900489a532766ff181ebe724b42e6cc7869e9199bcb6a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:b3:b6:48:c7:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-748804",
	                        "6222e78e6421"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-748804 -n functional-748804
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs -n 25: (1.425649903s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-748804 service hello-node --url                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh cat /etc/hostname                                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh findmnt -T /mount1                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/406799.pem                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh findmnt -T /mount2                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /usr/share/ca-certificates/406799.pem                                   │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ tunnel         │ functional-748804 tunnel --alsologtostderr                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh            │ functional-748804 ssh findmnt -T /mount3                                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ mount          │ -p functional-748804 --kill=true                                                                       │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │                     │
	│ ssh            │ functional-748804 ssh sudo cat /etc/test/nested/copy/406799/hosts                                      │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/4067992.pem                                              │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons         │ functional-748804 addons list                                                                          │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ addons         │ functional-748804 addons list -o json                                                                  │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ ssh            │ functional-748804 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:29 UTC │ 02 Dec 25 15:29 UTC │
	│ image          │ functional-748804 image ls --format short --alsologtostderr                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format yaml --alsologtostderr                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ ssh            │ functional-748804 ssh pgrep buildkitd                                                                  │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │                     │
	│ image          │ functional-748804 image build -t localhost/my-image:functional-748804 testdata/build --alsologtostderr │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format json --alsologtostderr                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls --format table --alsologtostderr                                            │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ update-context │ functional-748804 update-context --alsologtostderr -v=2                                                │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	│ image          │ functional-748804 image ls                                                                             │ functional-748804 │ jenkins │ v1.37.0 │ 02 Dec 25 15:35 UTC │ 02 Dec 25 15:35 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:29:26
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:29:26.139222  472486 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:26.139339  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139346  472486 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:26.139351  472486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:26.139620  472486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:26.140110  472486 out.go:368] Setting JSON to false
	I1202 15:29:26.141235  472486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:26.141317  472486 start.go:143] virtualization: kvm guest
	I1202 15:29:26.143611  472486 out.go:179] * [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:26.145431  472486 notify.go:221] Checking for updates...
	I1202 15:29:26.145475  472486 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:26.147028  472486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:26.148571  472486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:26.151226  472486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:26.152690  472486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:26.153938  472486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:26.155740  472486 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:26.156423  472486 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:26.185109  472486 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:26.185249  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.258186  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.246627139 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.258363  472486 docker.go:319] overlay module found
	I1202 15:29:26.260367  472486 out.go:179] * Using the docker driver based on existing profile
	I1202 15:29:26.262125  472486 start.go:309] selected driver: docker
	I1202 15:29:26.262148  472486 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.262256  472486 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:26.262347  472486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.323640  472486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.3130954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.324500  472486 cni.go:84] Creating CNI manager for ""
	I1202 15:29:26.324578  472486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:29:26.324624  472486 start.go:353] cluster config:
	{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.327384  472486 out.go:179] * dry-run validation complete!
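The configuration block echoed twice above is what minikube validates in dry-run mode; no host state is changed. A rough way to reproduce the same validation locally, assuming the same binary and profile name as in this report (flags are illustrative, not the exact test invocation):

  $ out/minikube-linux-amd64 start -p functional-748804 --driver=docker --container-runtime=containerd --dry-run --alsologtostderr -v=1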
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8e4faaf1af6e0       56cc512116c8f       10 minutes ago      Exited              mount-munger              0                   8ef8320ca8748       busybox-mount                               default
	c26834edb0b97       9056ab77afb8e       10 minutes ago      Running             echo-server               0                   883b5dde25cab       hello-node-5758569b79-n9fw4                 default
	8c7bed2b3aa74       6e38f40d628db       10 minutes ago      Running             storage-provisioner       2                   7c72eb9f1b949       storage-provisioner                         kube-system
	3b2e144de157e       aa9d02839d8de       10 minutes ago      Running             kube-apiserver            0                   6c5ffab4d9199       kube-apiserver-functional-748804            kube-system
	5fd6717e2dd43       45f3cc72d235f       10 minutes ago      Running             kube-controller-manager   2                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	133eb1abf8f5a       a3e246e9556e9       10 minutes ago      Running             etcd                      1                   71bf699e9dad7       etcd-functional-748804                      kube-system
	2b69351c8a77e       8a4ded35a3eb1       10 minutes ago      Running             kube-proxy                1                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	c61260d974192       45f3cc72d235f       10 minutes ago      Exited              kube-controller-manager   1                   e41f4d9e9066c       kube-controller-manager-functional-748804   kube-system
	a119c7112144b       7bb6219ddab95       10 minutes ago      Running             kube-scheduler            1                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	3e94dbe1375f9       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       1                   7c72eb9f1b949       storage-provisioner                         kube-system
	467501febc847       aa5e3ebc0dfed       10 minutes ago      Running             coredns                   1                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	511bcd0a11b99       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   f5df6f632b2da       kindnet-mr459                               kube-system
	b35dbd9505dcb       aa5e3ebc0dfed       11 minutes ago      Exited              coredns                   0                   6baefaaf14fbf       coredns-7d764666f9-hbkc9                    kube-system
	71dd3db26040e       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   f5df6f632b2da       kindnet-mr459                               kube-system
	cbf183f96ad45       8a4ded35a3eb1       11 minutes ago      Exited              kube-proxy                0                   2d0e156464bf6       kube-proxy-lcgn8                            kube-system
	180ffdb115bcf       7bb6219ddab95       11 minutes ago      Exited              kube-scheduler            0                   6e835981c9dcd       kube-scheduler-functional-748804            kube-system
	0fd3f9a9df703       a3e246e9556e9       11 minutes ago      Exited              etcd                      0                   71bf699e9dad7       etcd-functional-748804                      kube-system
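The table above is the CRI-level container state on the node; roughly the same view can be pulled by hand over minikube ssh (profile name taken from this report, crictl needs root on the node):

  $ out/minikube-linux-amd64 -p functional-748804 ssh -- sudo crictl ps -a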
	
	
	==> containerd <==
	Dec 02 15:35:18 functional-748804 containerd[4479]: time="2025-12-02T15:35:18.728762617Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355916682Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.355945565Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Dec 02 15:35:21 functional-748804 containerd[4479]: time="2025-12-02T15:35:21.356807092Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.599983941Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:23 functional-748804 containerd[4479]: time="2025-12-02T15:35:23.600003111Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Dec 02 15:35:30 functional-748804 containerd[4479]: time="2025-12-02T15:35:30.728721002Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973088439Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.973086888Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:32 functional-748804 containerd[4479]: time="2025-12-02T15:35:32.974151515Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213521325Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.213539233Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Dec 02 15:35:35 functional-748804 containerd[4479]: time="2025-12-02T15:35:35.214567227Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Dec 02 15:35:37 functional-748804 containerd[4479]: time="2025-12-02T15:35:37.457420715Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:37 functional-748804 containerd[4479]: time="2025-12-02T15:35:37.457499756Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.206853059Z" level=info msg="connecting to shim 9zoe6tth1yp0t0jjino2h5pwz" address="unix:///run/containerd/s/d3f28d56c80d12df3256392c31a09bf3a1a744181e49de49dbf5a689e0fd2e0f" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291917036Z" level=info msg="shim disconnected" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291966250Z" level=info msg="cleaning up after shim disconnected" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.291981767Z" level=info msg="cleaning up dead shim" id=9zoe6tth1yp0t0jjino2h5pwz namespace=k8s.io
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.461605762Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-748804\""
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.467054894Z" level=info msg="ImageCreate event name:\"sha256:05aaff044f3194fc9099fd22d049f4cd103d443abb7f9a4cbe8848f801a0682e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:35:39 functional-748804 containerd[4479]: time="2025-12-02T15:35:39.467560016Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-748804\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 15:35:50 functional-748804 containerd[4479]: time="2025-12-02T15:35:50.730110324Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Dec 02 15:35:52 functional-748804 containerd[4479]: time="2025-12-02T15:35:52.974412620Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:35:52 functional-748804 containerd[4479]: time="2025-12-02T15:35:52.974450185Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10966"
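Every pull attempt in this window is rejected by registry-1.docker.io with HTTP 429, Docker Hub's unauthenticated pull rate limit, which is why the nginx, mysql, echo-server and dashboard pods never get their images. In a local reproduction, one way to sidestep the limit (a sketch, not part of the test) is to pre-load images already present in the local Docker daemon, or to start the cluster against a registry mirror where the runtime honours it:

  $ out/minikube-linux-amd64 -p functional-748804 image load nginx:alpine
  $ out/minikube-linux-amd64 start -p functional-748804 --registry-mirror=https://mirror.gcr.io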
	
	
	==> coredns [467501febc847e467dc2b1bb7632a8f5e694cd7b2bfb6697262857352e0c72ca] <==
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59950 - 10790 "HINFO IN 5936427929830372433.4033215274638678948. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027399913s
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
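The "waiting for Kubernetes API" and "Failed to watch" lines come from CoreDNS's kubernetes plugin while the restarted apiserver is still unreachable; once a watch succeeds, the normal startup banner (.:53, the reload SHA, the version line) follows. The live Corefile behind this plugin can be inspected with (assuming minikube's default kubeconfig context name):

  $ kubectl --context functional-748804 -n kube-system get configmap coredns -o yaml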
	
	
	==> coredns [b35dbd9505dcbba128c22f8c3b17f1dedfc5404d131acbfc9c2360bae30ebdd4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:33602 - 24598 "HINFO IN 8541481598738272659.1752149788769955562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031231862s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-748804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-748804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-748804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_28_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:28:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-748804
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:39:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:36:11 +0000   Tue, 02 Dec 2025 15:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-748804
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                4f8df55b-decd-494b-acfb-0d7449c62078
	  Boot ID:                    54b7568c-9bf9-47f9-8d68-e36a3a33af00
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-n9fw4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-45wbs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-tcjzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-hbkc9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-748804                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-mr459                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-748804              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-748804     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lcgn8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-748804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-8p6fj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-qvfrp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  11m   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-748804 event: Registered Node functional-748804 in Controller
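This node report is plain kubectl describe node output. The "Allocated resources" percentages follow from the pod requests against the node's 8 CPUs and 32863360Ki of memory: CPU requests 600m + 100m + 100m + 100m + 250m + 200m + 100m = 1450m, and 1450m / 8000m ≈ 18%; memory requests 512Mi + 70Mi + 100Mi + 50Mi = 732Mi ≈ 2% of the node's memory. To regenerate the report:

  $ kubectl --context functional-748804 describe node functional-748804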
	
	
	==> dmesg <==
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047826] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[Dec 2 15:35] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +8.063519] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[ +12.324769] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.050503] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000024] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023897] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023943] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000020] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023953] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +1.023878] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +2.047890] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +4.031799] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000009] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	[  +8.191554] IPv4: martian source 10.109.87.195 from 192.168.49.2, on dev eth0
	[  +0.000027] ll header: 00000000: fa b3 b6 48 c7 62 7e 6c 17 68 cf 5d 08 00
	
	
	==> etcd [0fd3f9a9df703f05808ad0be200c0376f9990b42a6e6db124573d8d8aea41d62] <==
	{"level":"warn","ts":"2025-12-02T15:28:01.705301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.712264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.719006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:01.762879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:28:03.766021Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.359771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T15:28:03.766112Z","caller":"traceutil/trace.go:172","msg":"trace[687264165] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:207; }","duration":"107.4911ms","start":"2025-12-02T15:28:03.658606Z","end":"2025-12-02T15:28:03.766097Z","steps":["trace[687264165] 'agreement among raft nodes before linearized reading'  (duration: 47.402237ms)","trace[687264165] 'range keys from in-memory index tree'  (duration: 59.913776ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:28:03.766198Z","caller":"traceutil/trace.go:172","msg":"trace[1479526730] transaction","detail":"{read_only:false; response_revision:208; number_of_response:1; }","duration":"108.02388ms","start":"2025-12-02T15:28:03.658152Z","end":"2025-12-02T15:28:03.766176Z","steps":["trace[1479526730] 'process raft request'  (duration: 47.910328ms)","trace[1479526730] 'compare'  (duration: 59.848277ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-02T15:29:00.675066Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:29:00.675150Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:29:00.675262Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:29:00.676839Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.676881Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:29:00.676941Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676967Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677017Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:29:00.676987Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677030Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:29:00.677045Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:29:00.677060Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679245Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:29:00.679302Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:29:00.679344Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:29:00.679377Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-748804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [133eb1abf8f5a0e3e8ce65d6e6ebf24893cd038f129c8a429e6545b040014e17] <==
	{"level":"warn","ts":"2025-12-02T15:29:02.992743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.006763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.015100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.022088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.034320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.041243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.048661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.055653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.062574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.070913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.077381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.084291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.091309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.098135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.104641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.112398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.119294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.135772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.144860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.152024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.158695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:29:03.212894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:39:02.692233Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1329}
	{"level":"info","ts":"2025-12-02T15:39:02.713724Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1329,"took":"21.143141ms","hash":278410728,"current-db-size-bytes":3932160,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2019328,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-02T15:39:02.713781Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":278410728,"revision":1329,"compact-revision":-1}
	
	
	==> kernel <==
	 15:39:38 up  2:22,  0 user,  load average: 0.11, 0.17, 0.40
	Linux functional-748804 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [511bcd0a11b99f5dc7b64a5fdb7f3344c73d85de581a14e9bb569220007ce972] <==
	I1202 15:37:31.447877       1 main.go:301] handling current node
	I1202 15:37:41.447047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:37:41.447107       1 main.go:301] handling current node
	I1202 15:37:51.447501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:37:51.447540       1 main.go:301] handling current node
	I1202 15:38:01.456392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:01.456440       1 main.go:301] handling current node
	I1202 15:38:11.446943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:11.447001       1 main.go:301] handling current node
	I1202 15:38:21.447523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:21.447590       1 main.go:301] handling current node
	I1202 15:38:31.455705       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:31.455742       1 main.go:301] handling current node
	I1202 15:38:41.447416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:41.447449       1 main.go:301] handling current node
	I1202 15:38:51.449578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:38:51.449623       1 main.go:301] handling current node
	I1202 15:39:01.447648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:01.447715       1 main.go:301] handling current node
	I1202 15:39:11.447774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:11.447814       1 main.go:301] handling current node
	I1202 15:39:21.447632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:21.447697       1 main.go:301] handling current node
	I1202 15:39:31.455732       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:39:31.455779       1 main.go:301] handling current node
	
	
	==> kindnet [71dd3db26040e5b2ca6139b6c4624cc876a85ee5da6a3af870c7bf7350b68965] <==
	I1202 15:28:13.964841       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1202 15:28:13.965136       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1202 15:28:13.965321       1 main.go:148] setting mtu 1500 for CNI 
	I1202 15:28:13.965340       1 main.go:178] kindnetd IP family: "ipv4"
	I1202 15:28:13.965372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-02T15:28:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1202 15:28:14.167511       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1202 15:28:14.167537       1 controller.go:381] "Waiting for informer caches to sync"
	I1202 15:28:14.167547       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1202 15:28:14.262349       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1202 15:28:14.667734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1202 15:28:14.667761       1 metrics.go:72] Registering metrics
	I1202 15:28:14.667822       1 controller.go:711] "Syncing nftables rules"
	I1202 15:28:24.167082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:24.167201       1 main.go:301] handling current node
	I1202 15:28:34.174488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:34.174527       1 main.go:301] handling current node
	I1202 15:28:44.170814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 15:28:44.170903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3b2e144de157e70de00d1b1ca9af127bb60c21bb6d622d6dbb8ac9301905bfde] <==
	I1202 15:29:03.673105       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:03.673180       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 15:29:03.673201       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 15:29:03.675016       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 15:29:03.677720       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1202 15:29:03.701547       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:29:03.805605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:29:04.577690       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	W1202 15:29:04.883473       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1202 15:29:04.885014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:29:04.891213       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:29:05.609614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:29:05.721038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:29:05.788248       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:29:05.795890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:29:19.113146       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.196.24"}
	I1202 15:29:23.230433       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:29:23.341846       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.140.74"}
	I1202 15:29:26.872818       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:29:26.999224       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.234.80"}
	I1202 15:29:27.026547       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.234.122"}
	I1202 15:29:35.649127       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.249.51"}
	I1202 15:29:36.670994       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.145.72"}
	I1202 15:29:37.147891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.99.226"}
	I1202 15:39:03.598803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
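The apiserver log confirms that the Services behind the failing workloads were created and assigned ClusterIPs (hello-node, the dashboard Services, nginx-svc, mysql, hello-node-connect), which points back at the image-pull failures above rather than at anything rejected on the API side. The allocations can be cross-checked with:

  $ kubectl --context functional-748804 get svc -A -o wide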
	
	
	==> kube-controller-manager [5fd6717e2dd4392afbf6f9c8c694a9fb1e9b933d75da91e35b2a86060be7f451] <==
	I1202 15:29:06.800081       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800122       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799219       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800334       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800435       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800465       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800507       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.799366       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1202 15:29:06.800559       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800812       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.800871       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.802642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.804077       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.812453       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.900394       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.900417       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:29:06.900422       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:29:06.912983       1 shared_informer.go:377] "Caches are synced"
	E1202 15:29:26.926732       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.931750       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.935883       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.937394       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.941939       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946876       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:29:26.946893       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
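The burst of "serviceaccount \"kubernetes-dashboard\" not found" errors at 15:29:26 is an ordering effect while the dashboard addon manifests are being applied: the ReplicaSets are synced before the ServiceAccount object exists, and the controller retries until it does (the dashboard pods do appear in the node report above). Whether it settled can be confirmed with:

  $ kubectl --context functional-748804 -n kubernetes-dashboard get serviceaccounts,pods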
	
	
	==> kube-controller-manager [c61260d97419239c41be460f25d083080904cbb7018b647cf4294cbdcf2470b3] <==
	I1202 15:28:51.276093       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:28:51.283278       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:28:51.283306       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.285100       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:28:51.285154       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:28:51.285318       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:28:51.285367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:29:01.287242       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
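This is the earlier controller-manager attempt (c61260d9741, listed as Exited in the container table): it started at 15:28:51 while the apiserver on 192.168.49.2:8441 was still down during the restart, timed out waiting for /healthz, and was replaced by the instance logged above. On a running cluster the same health endpoint can be probed with:

  $ kubectl --context functional-748804 get --raw /healthz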
	
	
	==> kube-proxy [2b69351c8a77e410a053982397818fcb33f606f013f50c531e760cd0be7136f5] <==
	I1202 15:28:51.085789       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:51.158084       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:29:06.858497       1 shared_informer.go:377] "Caches are synced"
	I1202 15:29:06.858541       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:29:06.858650       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:29:06.882408       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:29:06.882468       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:29:06.888994       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:29:06.889421       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:29:06.889456       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:29:06.891000       1 config.go:200] "Starting service config controller"
	I1202 15:29:06.891018       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:29:06.891032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:29:06.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:29:06.891032       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:29:06.891050       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:29:06.891089       1 config.go:309] "Starting node config controller"
	I1202 15:29:06.891096       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:29:06.991280       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:29:06.991321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 15:29:06.991348       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:29:06.991330       1 shared_informer.go:356] "Caches are synced" controller="node config"
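The only non-informational line here is the warning that nodePortAddresses is unset, for which kube-proxy itself suggests --nodeport-addresses primary; the rest is a normal iptables-mode startup. In a kubeadm-managed cluster such as this one, the setting lives in the kube-proxy ConfigMap, which can be inspected (and edited if desired) with:

  $ kubectl --context functional-748804 -n kube-system get configmap kube-proxy -o yaml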
	
	
	==> kube-proxy [cbf183f96ad458cca4889f2d8498a70c350f894b4a6caf224691fa74849d8862] <==
	I1202 15:28:10.737057       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:28:10.814644       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:10.915846       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:10.915896       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:28:10.916036       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:28:10.989739       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:28:10.989815       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:28:10.995599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:28:10.996214       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:28:10.996241       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:10.998971       1 config.go:200] "Starting service config controller"
	I1202 15:28:10.999004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:28:10.999026       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:28:10.999031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:28:10.999046       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:28:10.999050       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:28:10.999197       1 config.go:309] "Starting node config controller"
	I1202 15:28:10.999231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:28:10.999244       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:28:11.099278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:28:11.099306       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:28:11.099314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [180ffdb115bcf7d869265aab1df2cb6f33f07745c05d1b65de6c107ce8e2de1a] <==
	E1202 15:28:03.202206       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1202 15:28:03.203345       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1202 15:28:03.217452       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.218466       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1202 15:28:03.227657       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.228764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1202 15:28:03.251113       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.252303       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1202 15:28:03.317017       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1202 15:28:03.318198       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1202 15:28:03.352431       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1202 15:28:03.353484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1202 15:28:03.468006       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1202 15:28:03.469092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1202 15:28:03.481689       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1202 15:28:03.482819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1202 15:28:03.494989       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1202 15:28:03.495927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1202 15:28:05.674207       1 shared_informer.go:377] "Caches are synced"
	I1202 15:28:50.457138       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1202 15:28:50.457038       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:50.457268       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:28:50.457302       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:28:50.457310       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:28:50.457335       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a119c7112144b4151b97909036d18ec5009dba3db288ef99bc67561e90c3c78a] <==
	I1202 15:28:51.290257       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:28:51.292769       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:28:51.292799       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:28:51.292808       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:28:51.300270       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:28:51.300302       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:28:51.302116       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:28:51.302153       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:28:51.302193       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:28:51.302473       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:29:11.803121       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:39:02 functional-748804 kubelet[5467]: E1202 15:39:02.729371    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	Dec 02 15:39:03 functional-748804 kubelet[5467]: E1202 15:39:03.728550    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:03 functional-748804 kubelet[5467]: E1202 15:39:03.729042    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:04 functional-748804 kubelet[5467]: E1202 15:39:04.727588    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:04 functional-748804 kubelet[5467]: E1202 15:39:04.728858    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:07 functional-748804 kubelet[5467]: E1202 15:39:07.728388    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:10 functional-748804 kubelet[5467]: E1202 15:39:10.728843    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b290365c-ec25-4d41-9827-3198e9a91a7c"
	Dec 02 15:39:11 functional-748804 kubelet[5467]: E1202 15:39:11.728779    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-748804" containerName="kube-scheduler"
	Dec 02 15:39:15 functional-748804 kubelet[5467]: E1202 15:39:15.729726    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.727911    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-748804" containerName="kube-apiserver"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.728077    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" containerName="kubernetes-dashboard"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.728652    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:17 functional-748804 kubelet[5467]: E1202 15:39:17.729344    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	Dec 02 15:39:18 functional-748804 kubelet[5467]: E1202 15:39:18.728863    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.728002    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.728196    5467 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-748804" containerName="kube-controller-manager"
	Dec 02 15:39:19 functional-748804 kubelet[5467]: E1202 15:39:19.729446    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:24 functional-748804 kubelet[5467]: E1202 15:39:24.729703    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="b290365c-ec25-4d41-9827-3198e9a91a7c"
	Dec 02 15:39:29 functional-748804 kubelet[5467]: E1202 15:39:29.728887    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-tcjzr" podUID="6fdc785a-6a87-41d8-8b53-eea26c8c69a3"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.727952    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" containerName="dashboard-metrics-scraper"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.728477    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-45wbs" podUID="266c903b-b1d9-4f00-bf97-7602875437e3"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.728762    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="5d5bfff0-32f2-46fe-97c6-6e057285303a"
	Dec 02 15:39:30 functional-748804 kubelet[5467]: E1202 15:39:30.729274    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-8p6fj" podUID="ad96cdb8-fb98-4e24-9510-f4c483deb625"
	Dec 02 15:39:32 functional-748804 kubelet[5467]: E1202 15:39:32.728357    5467 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" containerName="kubernetes-dashboard"
	Dec 02 15:39:32 functional-748804 kubelet[5467]: E1202 15:39:32.729696    5467 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests\\ntoomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-qvfrp" podUID="1f9cac4a-1ab2-413d-9967-a023719d8122"
	
	
	==> storage-provisioner [3e94dbe1375f92abe0a40b96e91788d0ca512d5269e4554d385391ff28bad723] <==
	I1202 15:28:50.986751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 15:28:50.990376       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [8c7bed2b3aa7476230e316a6c487c2dd5357a8d502b2c00552b544a3df23db7a] <==
	W1202 15:39:13.976704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:15.980189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:15.984546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:17.988009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:17.993734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:19.997334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:20.001716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:22.004935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:22.009243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:24.012182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:24.017286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:26.020467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:26.025016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:28.032407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:28.036712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:30.040657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:30.046382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:32.049806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:32.055425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:34.058878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:34.062829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:36.066545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:36.070892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:38.074207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:39:38.080422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
helpers_test.go:269: (dbg) Run:  kubectl --context functional-748804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1 (108.946753ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:25 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://8e4faaf1af6e0b08715571eee94bbdad1fdaac4808f24be0c5c93182c221596f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:29:28 +0000
	      Finished:     Tue, 02 Dec 2025 15:29:28 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvm2g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jvm2g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-748804
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.065s (2.066s including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-connect-9f67c86d4-45wbs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:37 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k7vx5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k7vx5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-45wbs to functional-748804
	  Normal   Pulling    6m55s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m55s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m52s (x5 over 9m55s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m50s (x20 over 9m55s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m36s (x21 over 9m55s)  kubelet  Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-tcjzr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:36 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86lp4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-86lp4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-tcjzr to functional-748804
	  Normal   Pulling    6m59s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m57s (x5 over 9m57s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m57s (x5 over 9m57s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m48s (x20 over 9m57s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m34s (x21 over 9m57s)  kubelet  Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6ld8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z6ld8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-748804
	  Normal   Pulling    6m42s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m40s (x5 over 9m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   6m40s (x5 over 9m59s)   kubelet  Error: ErrImagePull
	  Warning  Failed   4m57s (x19 over 9m59s)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m31s (x21 over 9m59s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-748804/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzwzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-jzwzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-748804
	  Warning  Failed     8m25s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed  7m2s (x5 over 10m)  kubelet  Error: ErrImagePull
	  Warning  Failed  7m2s                kubelet  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: 429 Too Many Requests
	toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed   4m59s (x18 over 10m)  kubelet  Error: ImagePullBackOff
	  Normal   BackOff  4m32s (x20 over 10m)  kubelet  Back-off pulling image "docker.io/nginx"
	  Normal   Pulling  4m21s (x6 over 10m)   kubelet  Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-8p6fj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-qvfrp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-748804 describe pod busybox-mount hello-node-connect-9f67c86d4-45wbs mysql-844cf969f6-tcjzr nginx-svc sp-pod dashboard-metrics-scraper-5565989548-8p6fj kubernetes-dashboard-b84665fb8-qvfrp: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (603.04s)
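
Every non-running pod in this failure reports the same root cause: unauthenticated pulls from registry-1.docker.io answered with 429 Too Many Requests. As a minimal sketch for reproducing and sidestepping the rate limit locally (profile name and image tags are taken from the logs above; this workaround is not part of the test harness):

	# inspect the back-off events for one of the affected pods
	kubectl --context functional-748804 describe pod mysql-844cf969f6-tcjzr
	# pull with authenticated Docker Hub credentials, then side-load into the cluster
	docker login
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx:alpine
	minikube -p functional-748804 image load docker.io/mysql:5.7
	minikube -p functional-748804 image load docker.io/nginx:alpine

With the images already present in containerd, the kubelet should not need to reach Docker Hub and the ImagePullBackOff loop shown above should not recur.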

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-748804 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b290365c-ec25-4d41-9827-3198e9a91a7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-748804 -n functional-748804
functional_test_tunnel_test.go:216: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-12-02 15:33:36.000177452 +0000 UTC m=+1453.363070954
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-748804 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-748804 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-748804/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:29:35 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6ld8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z6ld8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/nginx-svc to functional-748804
  Normal   Pulling    39s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     37s (x5 over 3m56s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status from GET request to https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests
  toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed   37s (x5 over 3m56s)   kubelet  Error: ErrImagePull
  Normal   BackOff  11s (x13 over 3m56s)  kubelet  Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed   11s (x13 over 3m56s)  kubelet  Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-748804 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-748804 logs nginx-svc -n default: exit status 1 (70.231937ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-748804 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (240.74s)
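
This setup step only waits for the pod created from testdata/testsvc.yaml to become Ready within 4 minutes; because of the same Docker Hub rate limit, it never does. A rough equivalent of the check done by the harness, run by hand (a sketch; context, label, and timeout are taken from the output above):

	kubectl --context functional-748804 apply -f testdata/testsvc.yaml
	kubectl --context functional-748804 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m0s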

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (118.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
E1202 15:33:37.543412  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1202 15:33:46.140412  406799 retry.go:31] will retry after 3.194834435s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1202 15:33:59.336376  406799 retry.go:31] will retry after 3.401788951s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1202 15:34:12.740366  406799 retry.go:31] will retry after 5.081581681s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1202 15:34:27.823485  406799 retry.go:31] will retry after 5.454687656s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1202 15:34:43.279845  406799 retry.go:31] will retry after 10.11236443s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1202 15:34:59.465888  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1202 15:35:03.393530  406799 retry.go:31] will retry after 21.61472016s: Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.109.87.195": Temporary Error: Get "http://10.109.87.195": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-748804 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.108.249.51   10.108.249.51   80:30186/TCP   6m
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (118.93s)
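
The AccessDirect failure follows from the WaitService failure above: the nginx-svc pod never left ImagePullBackOff, so the LoadBalancer service has no ready endpoints behind the tunnelled IP. A rough sketch of verifying the tunnel path by hand on a healthy cluster (assumes the nginx image pull succeeds; external IP taken from the service listing above; not part of the harness):

	# terminal 1: route the LoadBalancer IP onto the host
	minikube -p functional-748804 tunnel
	# terminal 2: confirm the service has endpoints, then hit the external IP
	kubectl --context functional-748804 get svc nginx-svc
	kubectl --context functional-748804 get endpoints nginx-svc
	curl -m 5 http://10.108.249.51

A successful run would return the default "Welcome to nginx!" page that the assertion at functional_test_tunnel_test.go:301 expects.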

                                                
                                    

Test pass (378/419)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 11.96
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.25
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 9.34
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.24
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.65
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.43
30 TestBinaryMirror 0.85
31 TestOffline 48.9
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 118.18
38 TestAddons/serial/Volcano 39.46
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 10.49
44 TestAddons/parallel/Registry 15.23
45 TestAddons/parallel/RegistryCreds 0.74
46 TestAddons/parallel/Ingress 20.14
47 TestAddons/parallel/InspektorGadget 11.73
48 TestAddons/parallel/MetricsServer 5.84
50 TestAddons/parallel/CSI 53.73
51 TestAddons/parallel/Headlamp 20.67
52 TestAddons/parallel/CloudSpanner 5.53
53 TestAddons/parallel/LocalPath 53.74
54 TestAddons/parallel/NvidiaDevicePlugin 5.51
55 TestAddons/parallel/Yakd 11.75
56 TestAddons/parallel/AmdGpuDevicePlugin 5.53
57 TestAddons/StoppedEnableDisable 12.71
58 TestCertOptions 31.02
59 TestCertExpiration 221.4
61 TestForceSystemdFlag 28.6
62 TestForceSystemdEnv 35.23
63 TestDockerEnvContainerd 37.78
67 TestErrorSpam/setup 22.18
68 TestErrorSpam/start 0.71
69 TestErrorSpam/status 1.02
70 TestErrorSpam/pause 1.54
71 TestErrorSpam/unpause 1.58
72 TestErrorSpam/stop 1.53
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 40.26
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.08
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.95
84 TestFunctional/serial/CacheCmd/cache/add_local 1.96
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 41.11
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.31
95 TestFunctional/serial/LogsFileCmd 1.35
96 TestFunctional/serial/InvalidService 4.71
98 TestFunctional/parallel/ConfigCmd 0.53
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.09
106 TestFunctional/parallel/ServiceCmdConnect 10.56
107 TestFunctional/parallel/AddonsCmd 0.19
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 1.75
113 TestFunctional/parallel/FileSync 0.29
114 TestFunctional/parallel/CertSync 1.85
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
122 TestFunctional/parallel/License 0.4
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.24
128 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
129 TestFunctional/parallel/ServiceCmd/List 0.52
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
133 TestFunctional/parallel/ServiceCmd/Format 0.39
134 TestFunctional/parallel/ProfileCmd/profile_list 0.45
135 TestFunctional/parallel/ServiceCmd/URL 0.39
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
137 TestFunctional/parallel/MountCmd/any-port 8.17
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
151 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
152 TestFunctional/parallel/ImageCommands/Setup 1.75
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.88
156 TestFunctional/parallel/MountCmd/specific-port 1.73
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.7
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
162 TestFunctional/parallel/Version/short 0.07
163 TestFunctional/parallel/Version/components 0.5
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 43.17
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.32
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.76
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.92
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.67
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 34.89
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.37
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.43
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.23
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.54
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.22
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.16
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.19
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.66
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.06
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.32
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.03
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.62
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.45
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.19
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.57
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.17
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.5
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.47
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.08
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.57
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.25
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.26
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.26
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.47
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.84
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.11
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.97
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.86
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.56
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.57
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.74
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.51
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.54
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.29
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.46
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.42
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.17
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.16
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.17
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 158.7
265 TestMultiControlPlane/serial/DeployApp 5.91
266 TestMultiControlPlane/serial/PingHostFromPods 1.26
267 TestMultiControlPlane/serial/AddWorkerNode 24.36
268 TestMultiControlPlane/serial/NodeLabels 0.07
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
270 TestMultiControlPlane/serial/CopyFile 18.52
271 TestMultiControlPlane/serial/StopSecondaryNode 12.83
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
273 TestMultiControlPlane/serial/RestartSecondaryNode 8.99
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 95.88
276 TestMultiControlPlane/serial/DeleteSecondaryNode 9.54
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
278 TestMultiControlPlane/serial/StopCluster 36.27
279 TestMultiControlPlane/serial/RestartCluster 52.69
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
281 TestMultiControlPlane/serial/AddSecondaryNode 36.71
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
287 TestJSONOutput/start/Command 40.14
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.74
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.63
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.9
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 33.5
313 TestKicCustomNetwork/use_default_bridge_network 21.63
314 TestKicExistingNetwork 26.44
315 TestKicCustomSubnet 23.95
316 TestKicStaticIP 26.28
317 TestMainNoArgs 0.07
318 TestMinikubeProfile 48.06
321 TestMountStart/serial/StartWithMountFirst 7.48
322 TestMountStart/serial/VerifyMountFirst 0.29
323 TestMountStart/serial/StartWithMountSecond 7.62
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.9
329 TestMountStart/serial/VerifyMountPostStop 0.3
332 TestMultiNode/serial/FreshStart2Nodes 65.14
333 TestMultiNode/serial/DeployApp2Nodes 4.76
334 TestMultiNode/serial/PingHostFrom2Pods 0.87
335 TestMultiNode/serial/AddNode 23.08
336 TestMultiNode/serial/MultiNodeLabels 0.06
337 TestMultiNode/serial/ProfileList 0.7
338 TestMultiNode/serial/CopyFile 10.5
339 TestMultiNode/serial/StopNode 2.33
340 TestMultiNode/serial/StartAfterStop 7.02
341 TestMultiNode/serial/RestartKeepsNodes 75.84
342 TestMultiNode/serial/DeleteNode 5.34
343 TestMultiNode/serial/StopMultiNode 24.11
344 TestMultiNode/serial/RestartMultiNode 45.46
345 TestMultiNode/serial/ValidateNameConflict 26.22
350 TestPreload 112.36
352 TestScheduledStopUnix 99.6
355 TestInsufficientStorage 11.86
356 TestRunningBinaryUpgrade 47.66
358 TestKubernetesUpgrade 322.19
359 TestMissingContainerUpgrade 83.75
361 TestPause/serial/Start 50.08
363 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
364 TestNoKubernetes/serial/StartWithK8s 26.34
365 TestPause/serial/SecondStartNoReconfiguration 6.68
366 TestPause/serial/Pause 0.67
367 TestPause/serial/VerifyStatus 0.36
368 TestPause/serial/Unpause 0.73
369 TestPause/serial/PauseAgain 0.84
370 TestPause/serial/DeletePaused 2.95
371 TestPause/serial/VerifyDeletedResources 0.68
372 TestStoppedBinaryUpgrade/Setup 3.31
373 TestStoppedBinaryUpgrade/Upgrade 319.17
374 TestNoKubernetes/serial/StartWithStopK8s 8.76
375 TestNoKubernetes/serial/Start 9.71
376 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
377 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
378 TestNoKubernetes/serial/ProfileList 1.28
379 TestNoKubernetes/serial/Stop 2.12
380 TestNoKubernetes/serial/StartNoArgs 7.53
381 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
389 TestNetworkPlugins/group/false 3.79
400 TestNetworkPlugins/group/auto/Start 42.48
401 TestNetworkPlugins/group/kindnet/Start 72.22
402 TestNetworkPlugins/group/auto/KubeletFlags 0.3
403 TestNetworkPlugins/group/auto/NetCatPod 8.19
404 TestNetworkPlugins/group/auto/DNS 0.15
405 TestNetworkPlugins/group/auto/Localhost 0.12
406 TestNetworkPlugins/group/auto/HairPin 0.11
407 TestNetworkPlugins/group/calico/Start 53.78
408 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
409 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
410 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
411 TestNetworkPlugins/group/kindnet/DNS 0.16
412 TestNetworkPlugins/group/kindnet/Localhost 0.12
413 TestNetworkPlugins/group/kindnet/HairPin 0.14
414 TestNetworkPlugins/group/custom-flannel/Start 51.05
415 TestNetworkPlugins/group/calico/ControllerPod 6.01
416 TestNetworkPlugins/group/calico/KubeletFlags 0.32
417 TestNetworkPlugins/group/calico/NetCatPod 9.2
418 TestNetworkPlugins/group/calico/DNS 0.13
419 TestNetworkPlugins/group/calico/Localhost 0.12
420 TestNetworkPlugins/group/calico/HairPin 0.15
421 TestNetworkPlugins/group/enable-default-cni/Start 59.54
422 TestStoppedBinaryUpgrade/MinikubeLogs 1.61
423 TestNetworkPlugins/group/flannel/Start 52.94
424 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
425 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
426 TestNetworkPlugins/group/custom-flannel/DNS 0.15
427 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
428 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
429 TestNetworkPlugins/group/bridge/Start 66.01
431 TestStartStop/group/old-k8s-version/serial/FirstStart 53.4
432 TestNetworkPlugins/group/flannel/ControllerPod 6.01
433 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
434 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
435 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
436 TestNetworkPlugins/group/flannel/NetCatPod 8.31
437 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
438 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
439 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
440 TestNetworkPlugins/group/flannel/DNS 0.16
441 TestNetworkPlugins/group/flannel/Localhost 0.13
442 TestNetworkPlugins/group/flannel/HairPin 0.13
444 TestStartStop/group/no-preload/serial/FirstStart 51.03
446 TestStartStop/group/embed-certs/serial/FirstStart 42.72
447 TestStartStop/group/old-k8s-version/serial/DeployApp 12.34
448 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
449 TestNetworkPlugins/group/bridge/NetCatPod 10.25
450 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
451 TestStartStop/group/old-k8s-version/serial/Stop 12.5
452 TestNetworkPlugins/group/bridge/DNS 0.14
453 TestNetworkPlugins/group/bridge/Localhost 0.12
454 TestNetworkPlugins/group/bridge/HairPin 0.13
455 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
456 TestStartStop/group/old-k8s-version/serial/SecondStart 47.69
457 TestStartStop/group/embed-certs/serial/DeployApp 8.29
458 TestStartStop/group/no-preload/serial/DeployApp 9.26
460 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.06
461 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.43
462 TestStartStop/group/embed-certs/serial/Stop 12.14
463 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
464 TestStartStop/group/no-preload/serial/Stop 12.23
465 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
466 TestStartStop/group/embed-certs/serial/SecondStart 49.95
467 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
468 TestStartStop/group/no-preload/serial/SecondStart 45.24
469 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
470 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
471 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
472 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
473 TestStartStop/group/old-k8s-version/serial/Pause 2.94
475 TestStartStop/group/newest-cni/serial/FirstStart 34.08
476 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
477 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.71
478 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
479 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.57
480 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
481 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
482 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
483 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
484 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
485 TestStartStop/group/no-preload/serial/Pause 3.59
486 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
487 TestStartStop/group/embed-certs/serial/Pause 3.85
488 TestStartStop/group/newest-cni/serial/DeployApp 0
489 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.62
490 TestStartStop/group/newest-cni/serial/Stop 2.2
491 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
492 TestStartStop/group/newest-cni/serial/SecondStart 10.58
493 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
494 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
495 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
496 TestStartStop/group/newest-cni/serial/Pause 2.76
497 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
498 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
499 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
500 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.9
TestDownloadOnly/v1.28.0/json-events (11.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-449145 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-449145 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.954537999s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (11.96s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 15:09:34.633644  406799 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1202 15:09:34.633755  406799 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
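Note: the preload-exists check passes once the tarball fetched by the previous download-only run is present under the test's MINIKUBE_HOME. A minimal sketch of the equivalent manual check, assuming the same cache layout (the path is copied verbatim from the log line above):

    # verify the v1.28.0 containerd preload tarball is already cached locally
    test -f /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 \
      && echo "preload present"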

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-449145
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-449145: exit status 85 (80.608184ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-449145 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-449145 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:09:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:09:22.734814  406811 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:09:22.735105  406811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:22.735114  406811 out.go:374] Setting ErrFile to fd 2...
	I1202 15:09:22.735119  406811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:22.735345  406811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	W1202 15:09:22.735487  406811 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22021-403182/.minikube/config/config.json: open /home/jenkins/minikube-integration/22021-403182/.minikube/config/config.json: no such file or directory
	I1202 15:09:22.736037  406811 out.go:368] Setting JSON to true
	I1202 15:09:22.737017  406811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6705,"bootTime":1764681458,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:09:22.737083  406811 start.go:143] virtualization: kvm guest
	I1202 15:09:22.741390  406811 out.go:99] [download-only-449145] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1202 15:09:22.741604  406811 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 15:09:22.741817  406811 notify.go:221] Checking for updates...
	I1202 15:09:22.743755  406811 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:09:22.745216  406811 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:09:22.746847  406811 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:09:22.748290  406811 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:09:22.749618  406811 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:09:22.752020  406811 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:09:22.752305  406811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:09:22.777496  406811 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:09:22.777595  406811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:22.838497  406811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 15:09:22.828677708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:22.838653  406811 docker.go:319] overlay module found
	I1202 15:09:22.840578  406811 out.go:99] Using the docker driver based on user configuration
	I1202 15:09:22.840614  406811 start.go:309] selected driver: docker
	I1202 15:09:22.840623  406811 start.go:927] validating driver "docker" against <nil>
	I1202 15:09:22.840749  406811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:22.899420  406811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 15:09:22.889078809 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:22.899630  406811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:09:22.900167  406811 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:09:22.900334  406811 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:09:22.902039  406811 out.go:171] Using Docker driver with root privileges
	I1202 15:09:22.903177  406811 cni.go:84] Creating CNI manager for ""
	I1202 15:09:22.903257  406811 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:09:22.903272  406811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:09:22.903363  406811 start.go:353] cluster config:
	{Name:download-only-449145 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-449145 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:09:22.904647  406811 out.go:99] Starting "download-only-449145" primary control-plane node in "download-only-449145" cluster
	I1202 15:09:22.904689  406811 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 15:09:22.906044  406811 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:09:22.906087  406811 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 15:09:22.906188  406811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:09:22.924592  406811 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:09:22.924859  406811 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:09:22.924970  406811 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:09:23.001184  406811 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1202 15:09:23.001233  406811 cache.go:65] Caching tarball of preloaded images
	I1202 15:09:23.001472  406811 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 15:09:23.003605  406811 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 15:09:23.003654  406811 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1202 15:09:23.107517  406811 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1202 15:09:23.107694  406811 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1202 15:09:27.685220  406811 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	
	
	* The control-plane node download-only-449145 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449145"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
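Note: the preload download shown in the log above fetches the tarball with an MD5 checksum passed as a URL query parameter. A minimal sketch of an equivalent manual download plus verification, assuming curl and md5sum are available (URL and checksum are copied from the log lines above):

    # download the v1.28.0 preload tarball and verify it against the checksum reported by the GCS API
    curl -fLo preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"
    echo "2746dfda401436a5341e0500068bf339  preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4" | md5sum -c -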

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-449145
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-087336 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-087336 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.343984985s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.34s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 15:09:44.465650  406799 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1202 15:09:44.465716  406799 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-087336
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-087336: exit status 85 (79.163124ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-449145 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-449145 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ delete  │ -p download-only-449145                                                                                                                                                               │ download-only-449145 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ start   │ -o=json --download-only -p download-only-087336 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-087336 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:09:35
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:09:35.176980  407175 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:09:35.177282  407175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:35.177293  407175 out.go:374] Setting ErrFile to fd 2...
	I1202 15:09:35.177297  407175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:35.177522  407175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:09:35.178038  407175 out.go:368] Setting JSON to true
	I1202 15:09:35.178998  407175 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6717,"bootTime":1764681458,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:09:35.179061  407175 start.go:143] virtualization: kvm guest
	I1202 15:09:35.181132  407175 out.go:99] [download-only-087336] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:09:35.181377  407175 notify.go:221] Checking for updates...
	I1202 15:09:35.182805  407175 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:09:35.184199  407175 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:09:35.185412  407175 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:09:35.186948  407175 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:09:35.188361  407175 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:09:35.191020  407175 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:09:35.191316  407175 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:09:35.218989  407175 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:09:35.219086  407175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:35.277161  407175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 15:09:35.267727961 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:35.277264  407175 docker.go:319] overlay module found
	I1202 15:09:35.278888  407175 out.go:99] Using the docker driver based on user configuration
	I1202 15:09:35.278934  407175 start.go:309] selected driver: docker
	I1202 15:09:35.278944  407175 start.go:927] validating driver "docker" against <nil>
	I1202 15:09:35.279040  407175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:35.340315  407175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 15:09:35.330424498 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:35.340466  407175 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:09:35.340969  407175 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:09:35.341116  407175 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:09:35.343195  407175 out.go:171] Using Docker driver with root privileges
	I1202 15:09:35.344479  407175 cni.go:84] Creating CNI manager for ""
	I1202 15:09:35.344542  407175 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 15:09:35.344552  407175 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 15:09:35.344617  407175 start.go:353] cluster config:
	{Name:download-only-087336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-087336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:09:35.346958  407175 out.go:99] Starting "download-only-087336" primary control-plane node in "download-only-087336" cluster
	I1202 15:09:35.346976  407175 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 15:09:35.348473  407175 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:09:35.348504  407175 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 15:09:35.348619  407175 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:09:35.365489  407175 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:09:35.365689  407175 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:09:35.365720  407175 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 15:09:35.365727  407175 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 15:09:35.365742  407175 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 15:09:35.521615  407175 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	I1202 15:09:35.521658  407175 cache.go:65] Caching tarball of preloaded images
	I1202 15:09:35.521893  407175 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 15:09:35.523799  407175 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 15:09:35.523830  407175 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1202 15:09:35.622892  407175 preload.go:295] Got checksum from GCS API "9dc714afc7e85c27d8bb9ef4a563e9e2"
	I1202 15:09:35.622953  407175 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:9dc714afc7e85c27d8bb9ef4a563e9e2 -> /home/jenkins/minikube-integration/22021-403182/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-087336 host does not exist
	  To start a cluster, run: "minikube start -p download-only-087336"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-087336
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (2.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-139678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-139678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (2.654103035s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.65s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-139678
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-139678: exit status 85 (76.623572ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-449145 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-449145 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ delete  │ -p download-only-449145                                                                                                                                                                      │ download-only-449145 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ start   │ -o=json --download-only -p download-only-087336 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-087336 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ delete  │ -p download-only-087336                                                                                                                                                                      │ download-only-087336 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │ 02 Dec 25 15:09 UTC │
	│ start   │ -o=json --download-only -p download-only-139678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-139678 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:09:44
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:09:44.994944  407545 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:09:44.995073  407545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:44.995083  407545 out.go:374] Setting ErrFile to fd 2...
	I1202 15:09:44.995089  407545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:44.995356  407545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:09:44.995901  407545 out.go:368] Setting JSON to true
	I1202 15:09:44.996857  407545 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6727,"bootTime":1764681458,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:09:44.996925  407545 start.go:143] virtualization: kvm guest
	I1202 15:09:44.999075  407545 out.go:99] [download-only-139678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:09:44.999287  407545 notify.go:221] Checking for updates...
	I1202 15:09:45.000516  407545 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:09:45.001998  407545 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:09:45.003380  407545 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:09:45.008623  407545 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:09:45.010175  407545 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:09:45.012753  407545 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:09:45.013089  407545 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:09:45.037399  407545 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:09:45.037523  407545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:45.096165  407545 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 15:09:45.086532335 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:45.096270  407545 docker.go:319] overlay module found
	I1202 15:09:45.097843  407545 out.go:99] Using the docker driver based on user configuration
	I1202 15:09:45.097885  407545 start.go:309] selected driver: docker
	I1202 15:09:45.097892  407545 start.go:927] validating driver "docker" against <nil>
	I1202 15:09:45.097989  407545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:45.159452  407545 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-02 15:09:45.149594605 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:45.159626  407545 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:09:45.160293  407545 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:09:45.160440  407545 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:09:45.162331  407545 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-139678 host does not exist
	  To start a cluster, run: "minikube start -p download-only-139678"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)
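The "Non-zero exit ... exit status 85" lines above are expected for download-only profiles: "minikube logs" declines to run because the control-plane host was never created, as the stdout message explains. A small sketch of reproducing the same invocation and reading the exit code; the binary path and profile name are the ones shown in the log, and 85 is simply the value observed there, not something asserted here.

    // exit_code_check.go - sketch: run "minikube logs" against a download-only
    // profile and report the process exit status via exec.ExitError.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-139678")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("logs succeeded")
        case errors.As(err, &exitErr):
            // For a never-started profile the log above shows exit status 85.
            fmt.Printf("logs exited with status %d\n%s", exitErr.ExitCode(), out)
        default:
            log.Fatal(err) // e.g. the binary is not at this relative path
        }
    }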

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-139678
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-253836 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-253836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-253836
--- PASS: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestBinaryMirror (0.85s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 15:09:49.105956  406799 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-441155 --alsologtostderr --binary-mirror http://127.0.0.1:42515 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-441155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-441155
--- PASS: TestBinaryMirror (0.85s)
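The binary.go line in TestBinaryMirror shows kubectl being fetched with a "checksum=file:" reference to the published kubectl.sha256. A hedged sketch of the same idea, verifying a fresh download of kubectl v1.34.2 against its .sha256 file on dl.k8s.io; the URLs are taken from the log, and this is not the harness's downloader.

    // kubectl_sha256_check.go - sketch: download kubectl and compare its SHA-256
    // with the digest published next to it, mirroring the checksum=file: reference.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "log"
        "net/http"
        "strings"
    )

    func fetch(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        return b
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl"
        want := strings.Fields(string(fetch(base + ".sha256")))[0]
        got := fmt.Sprintf("%x", sha256.Sum256(fetch(base)))
        if got != want {
            log.Fatalf("sha256 mismatch: got %s, want %s", got, want)
        }
        fmt.Println("kubectl sha256 OK")
    }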

                                                
                                    
TestOffline (48.9s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-943264 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-943264 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (46.304590532s)
helpers_test.go:175: Cleaning up "offline-containerd-943264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-943264
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-943264: (2.598897795s)
--- PASS: TestOffline (48.90s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371602
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-371602: exit status 85 (69.225006ms)

                                                
                                                
-- stdout --
	* Profile "addons-371602" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371602"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371602
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-371602: exit status 85 (68.651809ms)

                                                
                                                
-- stdout --
	* Profile "addons-371602" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371602"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (118.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-371602 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-371602 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m58.182392216s)
--- PASS: TestAddons/Setup (118.18s)

                                                
                                    
TestAddons/serial/Volcano (39.46s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 17.94418ms
addons_test.go:876: volcano-admission stabilized in 18.001934ms
addons_test.go:868: volcano-scheduler stabilized in 18.074582ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-mvqcg" [1a4f2c8e-bf38-4766-8bcf-068572ab0576] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003269276s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-pz8tr" [5a5deca3-fe00-43da-8f14-3429d46cc2f8] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003833029s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-cmglm" [381cb466-59e9-45bd-a603-fb98d237d74d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004333647s
addons_test.go:903: (dbg) Run:  kubectl --context addons-371602 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-371602 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-371602 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [12a67350-25ba-4005-86cc-76107c13d9be] Pending
helpers_test.go:352: "test-job-nginx-0" [12a67350-25ba-4005-86cc-76107c13d9be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [12a67350-25ba-4005-86cc-76107c13d9be] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004400951s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable volcano --alsologtostderr -v=1: (12.105973377s)
--- PASS: TestAddons/serial/Volcano (39.46s)
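Most of the addon subtests above rely on the same helpers_test.go:352 pattern: wait up to a deadline for every pod matching a label selector to report phase Running. Below is a stand-in sketch of that loop using kubectl; the context, namespace, and selector are example values from the Volcano block, and the real helper lives in the minikube test harness, not here.

    // wait_for_pods.go - stand-in sketch of the "waiting ... for pods matching"
    // steps: poll kubectl until all pods for a label selector are Running.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func podsRunning(kubeContext, ns, selector string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext, "-n", ns,
            "get", "pods", "-l", selector,
            "-o", "jsonpath={.items[*].status.phase}").Output()
        if err != nil {
            return false, err
        }
        phases := strings.Fields(string(out))
        if len(phases) == 0 {
            return false, nil // nothing scheduled yet
        }
        for _, p := range phases {
            if p != "Running" {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := podsRunning("addons-371602", "volcano-system", "app=volcano-scheduler")
            if err != nil {
                log.Fatal(err)
            }
            if ok {
                fmt.Println("all matching pods are Running")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pods")
    }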

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-371602 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-371602 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-371602 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-371602 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5b7ee887-b370-4886-80f4-b798e8fc57d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5b7ee887-b370-4886-80f4-b798e8fc57d6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00443334s
addons_test.go:694: (dbg) Run:  kubectl --context addons-371602 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-371602 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-371602 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.49s)
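The FakeCredentials subtest checks that the gcp-auth webhook injected credentials into the busybox pod by running printenv through kubectl exec (addons_test.go:694 and :744 above). A minimal sketch of the same probe, assuming the context and pod name shown in the log:

    // gcp_auth_env_check.go - sketch: confirm the gcp-auth webhook injected the
    // expected environment variables into the busybox pod.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func podEnv(kubeContext, pod, name string) string {
        out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
            "/bin/sh", "-c", "printenv "+name).Output()
        if err != nil {
            log.Fatalf("printenv %s failed: %v", name, err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        creds := podEnv("addons-371602", "busybox", "GOOGLE_APPLICATION_CREDENTIALS")
        project := podEnv("addons-371602", "busybox", "GOOGLE_CLOUD_PROJECT")
        fmt.Println("GOOGLE_APPLICATION_CREDENTIALS:", creds)
        fmt.Println("GOOGLE_CLOUD_PROJECT:", project)
    }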

                                                
                                    
TestAddons/parallel/Registry (15.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.869974ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-kr4z4" [785523a6-e9e2-4e86-860a-e6931e38212e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004325264s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2tmnb" [2cada359-2aa9-417f-ae46-f09c6d1d46f7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004535009s
addons_test.go:392: (dbg) Run:  kubectl --context addons-371602 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-371602 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-371602 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.334286251s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 ip
2025/12/02 15:13:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.23s)
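The Registry subtest probes the addon twice: once from inside the cluster by resolving the registry Service DNS name with wget, and once from the host against the registry-proxy on the node IP (the "GET http://192.168.49.2:5000" debug line). A sketch of both probes follows; using a "minikube" binary on PATH and this profile name are assumptions for illustration, since the harness drives its own built binary.

    // registry_check.go - sketch of the two registry probes shown above.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        // In-cluster probe, mirroring the kubectl run step in the log.
        in := exec.Command("kubectl", "--context", "addons-371602", "run", "--rm", "-i",
            "--restart=Never", "--image=gcr.io/k8s-minikube/busybox", "registry-test",
            "--", "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        if out, err := in.CombinedOutput(); err != nil {
            log.Fatalf("in-cluster probe failed: %v\n%s", err, out)
        }

        // Host probe against the registry-proxy on the node IP, port 5000.
        ipOut, err := exec.Command("minikube", "-p", "addons-371602", "ip").Output()
        if err != nil {
            log.Fatal(err)
        }
        ip := strings.TrimSpace(string(ipOut))
        resp, err := http.Get(fmt.Sprintf("http://%s:5000", ip))
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
        fmt.Println("registry reachable, status:", resp.Status)
    }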

                                                
                                    
TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.210855ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-371602
addons_test.go:332: (dbg) Run:  kubectl --context addons-371602 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                    
TestAddons/parallel/Ingress (20.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-371602 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-371602 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-371602 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [c7ad5762-f2ea-4c17-8239-39d609da422a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [c7ad5762-f2ea-4c17-8239-39d609da422a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004382932s
I1202 15:12:57.755776  406799 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-371602 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable ingress-dns --alsologtostderr -v=1: (1.063872379s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable ingress --alsologtostderr -v=1: (7.831722358s)
--- PASS: TestAddons/parallel/Ingress (20.14s)
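The Ingress subtest verifies routing by sending a request with "Host: nginx.example.com" through the controller (the "minikube ssh curl" step above). The same check can be sketched directly from the host by overriding the Host header, assuming the node IP from the log (192.168.49.2) is reachable from the host, which holds for the docker driver on Linux.

    // ingress_host_header.go - sketch: route a request to the nginx backend by
    // overriding the Host header, the same trick the curl step above uses.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        nodeIP := "192.168.49.2" // value reported by "minikube ip" in the log

        req, err := http.NewRequest("GET", "http://"+nodeIP+"/", nil)
        if err != nil {
            log.Fatal(err)
        }
        // The ingress rule in testdata/nginx-ingress-v1.yaml matches on this host name.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }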

                                                
                                    
TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-nwk9w" [482c37d1-bc3d-4622-9042-3a62b461161e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003901163s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable inspektor-gadget --alsologtostderr -v=1: (5.72610095s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.871105ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mt8lc" [4dc86f08-01fe-4161-abc5-750318ca3465] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004269772s
addons_test.go:463: (dbg) Run:  kubectl --context addons-371602 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

                                                
                                    
TestAddons/parallel/CSI (53.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 15:13:07.948825  406799 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 15:13:07.952764  406799 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 15:13:07.952788  406799 kapi.go:107] duration metric: took 3.967927ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.978043ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-371602 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-371602 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7ede3ae4-d574-4071-91d3-a324debf13c4] Pending
helpers_test.go:352: "task-pv-pod" [7ede3ae4-d574-4071-91d3-a324debf13c4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7ede3ae4-d574-4071-91d3-a324debf13c4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004182175s
addons_test.go:572: (dbg) Run:  kubectl --context addons-371602 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-371602 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-371602 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-371602 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-371602 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-371602 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-371602 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4f93c23d-7025-4a33-8e5f-4f405bba5d85] Pending
helpers_test.go:352: "task-pv-pod-restore" [4f93c23d-7025-4a33-8e5f-4f405bba5d85] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4f93c23d-7025-4a33-8e5f-4f405bba5d85] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003162518s
addons_test.go:614: (dbg) Run:  kubectl --context addons-371602 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-371602 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-371602 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.58846591s)
--- PASS: TestAddons/parallel/CSI (53.73s)

                                                
                                    
TestAddons/parallel/Headlamp (20.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-371602 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-zd4f9" [7f427105-7eae-4a1b-8778-7666d40c929b] Pending
helpers_test.go:352: "headlamp-dfcdc64b-zd4f9" [7f427105-7eae-4a1b-8778-7666d40c929b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-zd4f9" [7f427105-7eae-4a1b-8778-7666d40c929b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.00458524s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable headlamp --alsologtostderr -v=1: (5.797118679s)
--- PASS: TestAddons/parallel/Headlamp (20.67s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-bfxx7" [37e78adb-6868-45a4-9d49-7bccb37030e8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003770011s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (53.74s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-371602 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-371602 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8ec0fe30-9796-46d2-b1f0-72ae99ba5a03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8ec0fe30-9796-46d2-b1f0-72ae99ba5a03] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8ec0fe30-9796-46d2-b1f0-72ae99ba5a03] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003142588s
addons_test.go:967: (dbg) Run:  kubectl --context addons-371602 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 ssh "cat /opt/local-path-provisioner/pvc-eadfbbc7-90e0-4fb2-be5e-3be426b3357d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-371602 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-371602 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.803322377s)
--- PASS: TestAddons/parallel/LocalPath (53.74s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6t6l4" [5a73f593-3765-47d7-bcc1-106a4897c5e6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004294258s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

                                                
                                    
TestAddons/parallel/Yakd (11.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5s59n" [f0aa3ffc-2004-40b3-81ee-7d7635d6c042] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002965541s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-371602 addons disable yakd --alsologtostderr -v=1: (5.746455829s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-xfgm4" [4fe6ebba-9487-4957-ad3e-dde5363111f7] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003806678s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-371602 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.71s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-371602
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-371602: (12.401439173s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371602
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371602
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-371602
--- PASS: TestAddons/StoppedEnableDisable (12.71s)

                                                
                                    
TestCertOptions (31.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-160075 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-160075 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (27.359799866s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-160075 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-160075 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-160075 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-160075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-160075
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-160075: (2.831507711s)
--- PASS: TestCertOptions (31.02s)
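TestCertOptions inspects the apiserver certificate with openssl inside the node. An equivalent sketch using Go's crypto/x509 instead: pull the certificate over "minikube ssh" and confirm the extra SANs passed via --apiserver-ips/--apiserver-names are present. The profile name and SAN values are the ones from the invocation above; a "minikube" binary on PATH is an assumption for illustration.

    // apiserver_sans.go - sketch: parse the apiserver certificate from the node
    // and check that the requested extra SANs appear in it.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "cert-options-160075", "ssh",
            "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(out)
        if block == nil {
            log.Fatal("no PEM block found in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        hasIP := false
        for _, ip := range cert.IPAddresses {
            if ip.String() == "192.168.15.15" {
                hasIP = true
            }
        }
        hasName := false
        for _, n := range cert.DNSNames {
            if n == "www.google.com" {
                hasName = true
            }
        }
        fmt.Printf("SAN 192.168.15.15 present: %v, SAN www.google.com present: %v\n", hasIP, hasName)
    }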

                                                
                                    
TestCertExpiration (221.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-016294 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1202 16:00:46.417007  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-016294 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.357379654s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-016294 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-016294 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.4931624s)
helpers_test.go:175: Cleaning up "cert-expiration-016294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-016294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-016294: (2.548599927s)
--- PASS: TestCertExpiration (221.40s)

                                                
                                    
TestForceSystemdFlag (28.6s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-677210 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-677210 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (26.103782089s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-677210 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-677210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-677210
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-677210: (2.172281167s)
--- PASS: TestForceSystemdFlag (28.60s)
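
The assertion on the config file is not shown in this log, only the file it reads (/etc/containerd/config.toml). A manual spot-check for the systemd cgroup driver would look roughly like the following; the SystemdCgroup key is containerd's CRI-plugin setting and is an assumption here, not something printed above.

    minikube start -p force-systemd-demo --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd
    # Expect the runc options in the containerd config to enable the systemd cgroup driver:
    minikube -p force-systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    minikube delete -p force-systemd-demo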

                                                
                                    
TestForceSystemdEnv (35.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-999583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-999583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.133515536s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-999583 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-999583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-999583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-999583: (2.611882501s)
--- PASS: TestForceSystemdEnv (35.23s)

                                                
                                    
TestDockerEnvContainerd (37.78s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-658235 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-658235 --driver=docker  --container-runtime=containerd: (21.823673491s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-658235"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-658235": (1.027980014s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX47SiWz/agent.430864" SSH_AGENT_PID="430865" DOCKER_HOST=ssh://docker@127.0.0.1:33155 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX47SiWz/agent.430864" SSH_AGENT_PID="430865" DOCKER_HOST=ssh://docker@127.0.0.1:33155 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX47SiWz/agent.430864" SSH_AGENT_PID="430865" DOCKER_HOST=ssh://docker@127.0.0.1:33155 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.91958171s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX47SiWz/agent.430864" SSH_AGENT_PID="430865" DOCKER_HOST=ssh://docker@127.0.0.1:33155 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-658235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-658235
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-658235: (1.986376478s)
--- PASS: TestDockerEnvContainerd (37.78s)
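
The SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST values the test splices into the commands above come straight from the docker-env output; interactively one would normally just eval it (profile name illustrative):

    minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
    # Point the host docker CLI at the daemon inside the node, tunnelled over SSH:
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls     # the freshly built image is listed from inside the node
    minikube delete -p dockerenv-demo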

                                                
                                    
TestErrorSpam/setup (22.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-930452 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-930452 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-930452 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-930452 --driver=docker  --container-runtime=containerd: (22.18372846s)
--- PASS: TestErrorSpam/setup (22.18s)

                                                
                                    
TestErrorSpam/start (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

                                                
                                    
TestErrorSpam/status (1.02s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 status
--- PASS: TestErrorSpam/status (1.02s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 stop: (1.303662613s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-930452 --log_dir /tmp/nospam-930452 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-403182/.minikube/files/etc/test/nested/copy/406799/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.26s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-031973 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.254708311s)
--- PASS: TestFunctional/serial/StartWithProxy (40.26s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1202 15:16:12.986147  406799 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-031973 --alsologtostderr -v=8: (6.074751894s)
functional_test.go:678: soft start took 6.075567281s for "functional-031973" cluster.
I1202 15:16:19.061292  406799 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-031973 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 cache add registry.k8s.io/pause:3.3: (1.045120564s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-031973 /tmp/TestFunctionalserialCacheCmdcacheadd_local2909877323/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache add minikube-local-cache-test:functional-031973
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 cache add minikube-local-cache-test:functional-031973: (1.571493339s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache delete minikube-local-cache-test:functional-031973
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-031973
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
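
Together with add_remote above, this covers the image cache end to end; roughly, with an illustrative tag:

    # Build an image on the host, then copy it into the cluster through minikube's cache:
    docker build -t minikube-local-cache-test:demo .
    minikube -p functional-031973 cache add minikube-local-cache-test:demo
    minikube cache list
    # Remove it from the cache (and from the host) when done:
    minikube -p functional-031973 cache delete minikube-local-cache-test:demo
    docker rmi minikube-local-cache-test:demo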

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.278144ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
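
In other words, deleting an image inside the node does not touch minikube's host-side cache, and cache reload pushes everything still cached back into the node:

    # Delete the image from the node's containerd store:
    minikube -p functional-031973 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-031973 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    # Re-push every image held in the local cache back into the node:
    minikube -p functional-031973 cache reload
    minikube -p functional-031973 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # present again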

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 kubectl -- --context functional-031973 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-031973 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 15:16:48.219595  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.225987  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.237457  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.258990  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.300548  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.381990  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.543599  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.865292  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:49.507415  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:50.789062  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:53.351020  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:58.472596  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-031973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.110063819s)
functional_test.go:776: restart took 41.110200974s for "functional-031973" cluster.
I1202 15:17:07.638820  406799 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (41.11s)
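
The --extra-config value persists in the profile (it shows up as ExtraOptions in the dry-run output later in this report) and is passed through to the kube-apiserver. A rough way to confirm it on the node follows; the manifest path is the standard kubeadm location and is an assumption, not shown in this log:

    minikube start -p functional-031973 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    minikube -p functional-031973 ssh \
      "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"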

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-031973 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 logs
E1202 15:17:08.714451  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs: (1.310535584s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 logs --file /tmp/TestFunctionalserialLogsFileCmd2257688309/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 logs --file /tmp/TestFunctionalserialLogsFileCmd2257688309/001/logs.txt: (1.346132774s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.71s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-031973 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-031973
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-031973: exit status 115 (368.568152ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30262 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-031973 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-031973 delete -f testdata/invalidsvc.yaml: (1.166626095s)
--- PASS: TestFunctional/serial/InvalidService (4.71s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 config get cpus: exit status 14 (96.768381ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 config get cpus: exit status 14 (109.076467ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
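
As seen above, config get exits with status 14 when the key is not set; the whole round trip is simply:

    minikube -p functional-031973 config set cpus 2
    minikube -p functional-031973 config get cpus     # prints the stored value
    minikube -p functional-031973 config unset cpus
    minikube -p functional-031973 config get cpus     # exit status 14, key not in config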

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-031973 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (193.975521ms)

                                                
                                                
-- stdout --
	* [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:28.653117  448995 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:28.653226  448995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.653234  448995 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:28.653239  448995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.653419  448995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:17:28.653872  448995 out.go:368] Setting JSON to false
	I1202 15:17:28.654873  448995 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7191,"bootTime":1764681458,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:17:28.654932  448995 start.go:143] virtualization: kvm guest
	I1202 15:17:28.656685  448995 out.go:179] * [functional-031973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:17:28.657885  448995 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:17:28.657876  448995 notify.go:221] Checking for updates...
	I1202 15:17:28.660225  448995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:17:28.661642  448995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:17:28.662876  448995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:17:28.664075  448995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:17:28.665266  448995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:17:28.667214  448995 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:17:28.668024  448995 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:17:28.695036  448995 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:17:28.695247  448995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:28.772012  448995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.757301964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:28.772189  448995 docker.go:319] overlay module found
	I1202 15:17:28.774236  448995 out.go:179] * Using the docker driver based on existing profile
	I1202 15:17:28.775657  448995 start.go:309] selected driver: docker
	I1202 15:17:28.775687  448995 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:28.775824  448995 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:17:28.777884  448995 out.go:203] 
	W1202 15:17:28.779212  448995 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:17:28.780525  448995 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-031973 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-031973 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.108985ms)

                                                
                                                
-- stdout --
	* [functional-031973] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:17:28.461134  448773 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:17:28.461329  448773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.461358  448773 out.go:374] Setting ErrFile to fd 2...
	I1202 15:17:28.461365  448773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:17:28.461758  448773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:17:28.462235  448773 out.go:368] Setting JSON to false
	I1202 15:17:28.463357  448773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7190,"bootTime":1764681458,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:17:28.463430  448773 start.go:143] virtualization: kvm guest
	I1202 15:17:28.465421  448773 out.go:179] * [functional-031973] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:17:28.466821  448773 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:17:28.466857  448773 notify.go:221] Checking for updates...
	I1202 15:17:28.469704  448773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:17:28.471385  448773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:17:28.472852  448773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:17:28.474231  448773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:17:28.477981  448773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:17:28.480191  448773 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:17:28.480832  448773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:17:28.510125  448773 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:17:28.510246  448773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:17:28.579539  448773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:17:28.569291105 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:17:28.579656  448773 docker.go:319] overlay module found
	I1202 15:17:28.581394  448773 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:17:28.582826  448773 start.go:309] selected driver: docker
	I1202 15:17:28.582843  448773 start.go:927] validating driver "docker" against &{Name:functional-031973 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-031973 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:17:28.582945  448773 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:17:28.584939  448773 out.go:203] 
	W1202 15:17:28.586190  448773 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:17:28.588305  448773 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-031973 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-031973 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hncff" [c3734cd7-1171-446c-a2fb-549536bee534] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hncff" [c3734cd7-1171-446c-a2fb-549536bee534] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006833132s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32476
functional_test.go:1680: http://192.168.49.2:32476: success! body:
Request served by hello-node-connect-7d85dfc575-hncff

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32476
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.56s)
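
The endpoint the test fetches is minikube resolving the NodePort of the exposed deployment; the same flow by hand:

    kubectl --context functional-031973 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-031973 expose deployment hello-node-connect --type=NodePort --port=8080
    # Once the pod is Running, resolve the reachable URL (node IP plus NodePort):
    minikube -p functional-031973 service hello-node-connect --url
    # e.g. http://192.168.49.2:32476, which the test then requests over HTTP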

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh -n functional-031973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2576747610/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh -n functional-031973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh -n functional-031973 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)
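
minikube cp works in both directions, a host path to a node path and <profile>:<node path> back to a host path, as the three invocations above show:

    # host -> node
    minikube -p functional-031973 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host (source prefixed with the profile/node name)
    minikube -p functional-031973 cp functional-031973:/home/docker/cp-test.txt /tmp/cp-test.txt
    # verify inside the node
    minikube -p functional-031973 ssh -n functional-031973 "sudo cat /home/docker/cp-test.txt"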

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/406799/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /etc/test/nested/copy/406799/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
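
The mapping being checked: anything under the .minikube/files/ directory on the host is copied to the same path rooted at / inside the node, so the file staged at .minikube/files/etc/test/nested/copy/406799/hosts (see the CopySyncFile step earlier) surfaces as:

    minikube -p functional-031973 ssh "sudo cat /etc/test/nested/copy/406799/hosts"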

                                                
                                    
TestFunctional/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/406799.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /etc/ssl/certs/406799.pem"
E1202 15:17:29.196186  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/406799.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /usr/share/ca-certificates/406799.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4067992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /etc/ssl/certs/4067992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4067992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /usr/share/ca-certificates/4067992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-031973 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
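
The go-template above prints only the label keys of the first node; two equivalent manual checks against the same context (assuming the usual single-node layout where the node is named after the profile):

kubectl --context functional-031973 get nodes --show-labels
kubectl --context functional-031973 get node functional-031973 -o jsonpath='{.metadata.labels}'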

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "sudo systemctl is-active docker": exit status 1 (343.007041ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "sudo systemctl is-active crio": exit status 1 (390.068927ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
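
The exit status 1 results above are the point of this test: `systemctl is-active` prints `inactive` and exits non-zero (status 3, as the ssh stderr shows) when a unit is not running, so on a containerd cluster only containerd should answer `active`. A manual spot-check:

minikube -p functional-031973 ssh "sudo systemctl is-active containerd"   # expected: active
minikube -p functional-031973 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit
minikube -p functional-031973 ssh "sudo systemctl is-active crio"         # expected: inactive, non-zero exit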

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 446035: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-031973 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [646d379a-6455-4b1c-949f-675a20c3e31e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [646d379a-6455-4b1c-949f-675a20c3e31e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003456861s
I1202 15:17:28.214418  406799 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.24s)
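
The suite polls the pod with its own helper; outside the harness an equivalent wait on the same selector can be expressed with kubectl (a sketch, not what the test itself runs):

kubectl --context functional-031973 apply -f testdata/testsvc.yaml
kubectl --context functional-031973 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m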

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-031973 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-031973 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8dm24" [e92ee8a5-d0ea-4416-93f3-ae1a0473cff8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-8dm24" [e92ee8a5-d0ea-4416-93f3-ae1a0473cff8] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003916148s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)
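
The hello-node deployment that the following ServiceCmd subtests probe is created with two plain kubectl commands; for reference, with an explicit wait added:

kubectl --context functional-031973 create deployment hello-node --image=kicbase/echo-server
kubectl --context functional-031973 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-031973 rollout status deployment/hello-node   # wait for the pod to come up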

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service list -o json
functional_test.go:1504: Took "513.125051ms" to run "out/minikube-linux-amd64 -p functional-031973 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31471
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "380.886451ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.74595ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31471
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
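
Once the NodePort service exists, the printed endpoint can be captured and probed directly; a sketch (192.168.49.2:31471 is specific to this run, so take the URL from the command rather than hard-coding it):

URL=$(minikube -p functional-031973 service hello-node --url)    # plain HTTP endpoint
curl -s "$URL"
minikube -p functional-031973 service hello-node --https --url   # HTTPS variant from the previous subtest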

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "399.118034ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "72.233364ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
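
The JSON form is what other tooling would consume; a sketch of pulling out just the profile names, assuming the usual valid/invalid layout of `profile list -o json` (the jq filter is an assumption, not something this test runs):

minikube profile list -o json | jq -r '.valid[].Name'   # assumes a top-level "valid" array with "Name" fields
minikube profile list -o json --light                   # faster listing that skips cluster status checks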

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdany-port2503063865/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764688647362761372" to /tmp/TestFunctionalparallelMountCmdany-port2503063865/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764688647362761372" to /tmp/TestFunctionalparallelMountCmdany-port2503063865/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764688647362761372" to /tmp/TestFunctionalparallelMountCmdany-port2503063865/001/test-1764688647362761372
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (311.607206ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:17:27.674754  406799 retry.go:31] will retry after 640.57203ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:17 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:17 test-1764688647362761372
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh cat /mount-9p/test-1764688647362761372
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-031973 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1e3fb65d-b656-465b-b25b-a9049c005155] Pending
helpers_test.go:352: "busybox-mount" [1e3fb65d-b656-465b-b25b-a9049c005155] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1e3fb65d-b656-465b-b25b-a9049c005155] Running
helpers_test.go:352: "busybox-mount" [1e3fb65d-b656-465b-b25b-a9049c005155] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1e3fb65d-b656-465b-b25b-a9049c005155] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00420446s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-031973 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdany-port2503063865/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.17s)
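
The 9p mount exercised above can be reproduced outside the test; a minimal sketch with a hypothetical host directory (the mount command stays in the foreground, so run it in its own terminal or background it):

mkdir -p /tmp/demo-mount                                          # placeholder host directory to share
minikube mount -p functional-031973 /tmp/demo-mount:/mount-9p &
MOUNT_PID=$!
minikube -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is visible in the node
minikube -p functional-031973 ssh "ls -la /mount-9p"
kill $MOUNT_PID                                                   # stop the mount process when done (the test also runs "sudo umount -f /mount-9p" as cleanup)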

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-031973 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.87.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
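
`minikube tunnel` is what gives the nginx-svc LoadBalancer an ingress IP that the host can reach; a sketch of the manual flow behind these serial subtests (the 10.109.87.195 address is specific to this run):

# terminal 1: keep the tunnel running; it may prompt for elevated privileges to add routes
minikube -p functional-031973 tunnel
# terminal 2: read the assigned ingress IP and hit the service directly
IP=$(kubectl --context functional-031973 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP"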

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-031973 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-031973 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-031973
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-031973
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-031973 image ls --format short --alsologtostderr:
I1202 15:17:41.968097  454406 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:41.968495  454406 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:41.968509  454406 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:41.968606  454406 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:41.969119  454406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:41.970195  454406 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:41.970320  454406 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:41.970775  454406 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:41.989810  454406 ssh_runner.go:195] Run: systemctl --version
I1202 15:17:41.989864  454406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:42.008944  454406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:42.110483  454406 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
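
The same image listing is available in the other formats exercised by the sibling subtests; for reference:

minikube -p functional-031973 image ls --format short   # tag-qualified names only
minikube -p functional-031973 image ls --format table   # adds IMAGE ID and SIZE columns
minikube -p functional-031973 image ls --format json
minikube -p functional-031973 image ls --format yaml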

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-031973 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-031973  │ sha256:ae37f5 │ 990B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:a5f569 │ 27.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:8aa150 │ 26MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:01e8ba │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:88320b │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-031973  │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ localhost/my-image                          │ functional-031973  │ sha256:3cee9e │ 775kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-031973 image ls --format table --alsologtostderr:
I1202 15:17:46.582558  455311 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:46.582749  455311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:46.582763  455311 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:46.582770  455311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:46.583005  455311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:46.583646  455311 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:46.583789  455311 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:46.584269  455311 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:46.604963  455311 ssh_runner.go:195] Run: systemctl --version
I1202 15:17:46.605089  455311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:46.623758  455311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:46.724782  455311 ssh_runner.go:195] Run: sudo crictl images --output json
E1202 15:18:10.158124  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:19:32.080320  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:21:48.211413  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:15.921831  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-031973 image ls --format json --alsologtostderr:
[{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"27060130"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d
311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:ae37f536915721b9eb7ce97471d6460e12436118831c328236fe805ec211353c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-031973"],"size":"990"},{"id":"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"22818657"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-031973"],"size":"2372971"},{"id":"sha2
56:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:3cee9e52db7f84d07ffe7229ab4923f5907bad4c745c94867691a4d976c29d2f","repoDigests":[],"repoTags":["localhost/my-image:functional-031973"],"size":"774886"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"25963482"},{"id":"sha256:d4918ca78576a537caa7b0c0430
51c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"17382272"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-031973 image ls --format json --alsologtostderr:
I1202 15:17:46.345524  455217 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:46.345811  455217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:46.345822  455217 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:46.345826  455217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:46.346075  455217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:46.346815  455217 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:46.346970  455217 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:46.347600  455217 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:46.366805  455217 ssh_runner.go:195] Run: systemctl --version
I1202 15:17:46.366865  455217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:46.386601  455217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:46.486820  455217 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-031973 image ls --format yaml --alsologtostderr:
- id: sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "22818657"
- id: sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "17382272"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-031973
size: "2372971"
- id: sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "25963482"
- id: sha256:ae37f536915721b9eb7ce97471d6460e12436118831c328236fe805ec211353c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-031973
size: "990"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "27060130"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-031973 image ls --format yaml --alsologtostderr:
I1202 15:17:42.205999  454461 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:42.206104  454461 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:42.206112  454461 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:42.206118  454461 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:42.206338  454461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:42.206948  454461 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:42.207061  454461 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:42.207549  454461 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:42.226206  454461 ssh_runner.go:195] Run: systemctl --version
I1202 15:17:42.226256  454461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:42.246311  454461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:42.345769  454461 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh pgrep buildkitd: exit status 1 (294.079233ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr: (3.361985503s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-031973 image build -t localhost/my-image:functional-031973 testdata/build --alsologtostderr:
I1202 15:17:42.732601  454639 out.go:360] Setting OutFile to fd 1 ...
I1202 15:17:42.732716  454639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:42.732720  454639 out.go:374] Setting ErrFile to fd 2...
I1202 15:17:42.732725  454639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:17:42.732920  454639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:17:42.733469  454639 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:42.734266  454639 config.go:182] Loaded profile config "functional-031973": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 15:17:42.734809  454639 cli_runner.go:164] Run: docker container inspect functional-031973 --format={{.State.Status}}
I1202 15:17:42.754131  454639 ssh_runner.go:195] Run: systemctl --version
I1202 15:17:42.754181  454639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-031973
I1202 15:17:42.773292  454639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-031973/id_rsa Username:docker}
I1202 15:17:42.873592  454639 build_images.go:162] Building image from path: /tmp/build.862976391.tar
I1202 15:17:42.873687  454639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:17:42.882207  454639 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.862976391.tar
I1202 15:17:42.886194  454639 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.862976391.tar: stat -c "%s %y" /var/lib/minikube/build/build.862976391.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.862976391.tar': No such file or directory
I1202 15:17:42.886236  454639 ssh_runner.go:362] scp /tmp/build.862976391.tar --> /var/lib/minikube/build/build.862976391.tar (3072 bytes)
I1202 15:17:42.904812  454639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.862976391
I1202 15:17:42.913200  454639 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.862976391 -xf /var/lib/minikube/build/build.862976391.tar
I1202 15:17:42.921576  454639 containerd.go:394] Building image: /var/lib/minikube/build/build.862976391
I1202 15:17:42.921701  454639 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.862976391 --local dockerfile=/var/lib/minikube/build/build.862976391 --output type=image,name=localhost/my-image:functional-031973
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:7341941f8d7c0a2993c4baf43777b480e5470375f5ae8d2799d9e76054d8209c 0.0s done
#8 exporting config sha256:3cee9e52db7f84d07ffe7229ab4923f5907bad4c745c94867691a4d976c29d2f done
#8 naming to localhost/my-image:functional-031973 done
#8 DONE 0.1s
I1202 15:17:46.006958  454639 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.862976391 --local dockerfile=/var/lib/minikube/build/build.862976391 --output type=image,name=localhost/my-image:functional-031973: (3.085217952s)
I1202 15:17:46.007049  454639 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.862976391
I1202 15:17:46.017313  454639 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.862976391.tar
I1202 15:17:46.025870  454639 build_images.go:218] Built localhost/my-image:functional-031973 from /tmp/build.862976391.tar
I1202 15:17:46.025914  454639 build_images.go:134] succeeded building to: functional-031973
I1202 15:17:46.025921  454639 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
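
`image build` packages the context directory, ships it to the node, and builds it there with BuildKit (buildctl), as the log above shows. Judging from the [1/3]..[3/3] steps, testdata/build is roughly the three-line Dockerfile reconstructed below; any local context can be built the same way (the /tmp/demo-build path is a placeholder):

mkdir -p /tmp/demo-build && cd /tmp/demo-build
# approximate Dockerfile contents inferred from the build steps logged above
printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
echo "demo content" > content.txt
minikube -p functional-031973 image build -t localhost/my-image:functional-031973 .
minikube -p functional-031973 image ls | grep my-image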

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.731175395s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-031973
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image load --daemon kicbase/echo-server:functional-031973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)
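
`image load --daemon` copies an image from the host's Docker daemon into the cluster's containerd store; the round trip used by these ImageCommands subtests looks like this:

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-031973
minikube -p functional-031973 image load --daemon kicbase/echo-server:functional-031973
minikube -p functional-031973 image ls | grep echo-server   # confirm the tag landed in containerd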

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image load --daemon kicbase/echo-server:functional-031973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-031973
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image load --daemon kicbase/echo-server:functional-031973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdspecific-port3709509138/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.156593ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:17:35.829030  406799 retry.go:31] will retry after 313.276992ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdspecific-port3709509138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "sudo umount -f /mount-9p": exit status 1 (299.867991ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-031973 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdspecific-port3709509138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image save kicbase/echo-server:functional-031973 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T" /mount1: exit status 1 (391.426819ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:17:37.654112  406799 retry.go:31] will retry after 321.976065ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-031973 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-031973 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2409947715/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image rm kicbase/echo-server:functional-031973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-031973
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 image save --daemon kicbase/echo-server:functional-031973 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-031973
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-031973 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-031973
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-031973
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-031973
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-403182/.minikube/files/etc/test/nested/copy/406799/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (43.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-748804 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (43.173300821s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (43.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 15:28:27.546997  406799 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-748804 --alsologtostderr -v=8: (6.320130837s)
functional_test.go:678: soft start took 6.320630757s for "functional-748804" cluster.
I1202 15:28:33.867597  406799 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-748804 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3634363890/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache add minikube-local-cache-test:functional-748804
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 cache add minikube-local-cache-test:functional-748804: (1.601877121s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache delete minikube-local-cache-test:functional-748804
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.619402ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 kubectl -- --context functional-748804 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-748804 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-748804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.892471793s)
functional_test.go:776: restart took 34.89262023s for "functional-748804" cluster.
I1202 15:29:16.068986  406799 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-748804 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs: (1.365671078s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2842651626/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2842651626/001/logs.txt: (1.428768625s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-748804 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-748804
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-748804: exit status 115 (382.247629ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30253 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-748804 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 config get cpus: exit status 14 (104.076595ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 config get cpus: exit status 14 (97.044382ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (197.760984ms)

                                                
                                                
-- stdout --
	* [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:29:25.935978  472332 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:25.936351  472332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:25.936364  472332 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:25.936370  472332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:25.936708  472332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:25.937376  472332 out.go:368] Setting JSON to false
	I1202 15:29:25.938693  472332 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:25.938774  472332 start.go:143] virtualization: kvm guest
	I1202 15:29:25.940695  472332 out.go:179] * [functional-748804] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:25.942352  472332 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:25.942378  472332 notify.go:221] Checking for updates...
	I1202 15:29:25.944718  472332 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:25.946622  472332 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:25.948223  472332 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:25.949456  472332 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:25.950709  472332 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:25.953304  472332 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:25.953979  472332 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:25.980726  472332 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:25.980829  472332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:26.053343  472332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:26.041872142 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:26.053467  472332 docker.go:319] overlay module found
	I1202 15:29:26.056782  472332 out.go:179] * Using the docker driver based on existing profile
	I1202 15:29:26.060899  472332 start.go:309] selected driver: docker
	I1202 15:29:26.060924  472332 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:26.061075  472332 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:26.063293  472332 out.go:203] 
	W1202 15:29:26.064757  472332 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:29:26.066059  472332 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-748804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (224.586767ms)

                                                
                                                
-- stdout --
	* [functional-748804] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:29:25.748241  472182 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:29:25.748561  472182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:25.748576  472182 out.go:374] Setting ErrFile to fd 2...
	I1202 15:29:25.748583  472182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:29:25.749063  472182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:29:25.749515  472182 out.go:368] Setting JSON to false
	I1202 15:29:25.750746  472182 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7908,"bootTime":1764681458,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:29:25.750814  472182 start.go:143] virtualization: kvm guest
	I1202 15:29:25.753512  472182 out.go:179] * [functional-748804] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:29:25.755143  472182 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:29:25.755128  472182 notify.go:221] Checking for updates...
	I1202 15:29:25.758515  472182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:29:25.761155  472182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 15:29:25.765383  472182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 15:29:25.767723  472182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:29:25.769873  472182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:29:25.772017  472182 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 15:29:25.772903  472182 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:29:25.809515  472182 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:29:25.809640  472182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:29:25.881487  472182 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:29:25.869324385 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:29:25.881676  472182 docker.go:319] overlay module found
	I1202 15:29:25.883624  472182 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:29:25.885004  472182 start.go:309] selected driver: docker
	I1202 15:29:25.885022  472182 start.go:927] validating driver "docker" against &{Name:functional-748804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-748804 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:29:25.885129  472182 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:29:25.886985  472182 out.go:203] 
	W1202 15:29:25.888197  472182 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:29:25.889347  472182 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh -n functional-748804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cp functional-748804:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1305142170/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh -n functional-748804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh -n functional-748804 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/406799/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /etc/test/nested/copy/406799/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/406799.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /etc/ssl/certs/406799.pem"
I1202 15:29:35.104032  406799 detect.go:223] nested VM detected
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/406799.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /usr/share/ca-certificates/406799.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4067992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /etc/ssl/certs/4067992.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4067992.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /usr/share/ca-certificates/4067992.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-748804 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
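Annotation: the NodeLabels check above renders the node's label keys with a kubectl go-template. kubectl templates use Go's text/template syntax, so the same "{{range $k, $v := ...}}" construct can be exercised directly. A minimal sketch over a hypothetical label map (the label values are illustrative, not read from the cluster):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical labels standing in for (index .items 0).metadata.labels.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-748804",
		"kubernetes.io/os":       "linux",
	}
	// Same template shape the test passes via --output=go-template.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	// Prints the label keys separated by spaces, in sorted key order.
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}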

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "sudo systemctl is-active docker": exit status 1 (304.883148ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "sudo systemctl is-active crio": exit status 1 (314.489066ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.62s)
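Annotation: the check above relies on "systemctl is-active" exiting non-zero ("Process exited with status 3" in the stderr blocks) when a unit is not active; that is how the test confirms docker and crio are disabled while containerd is the selected runtime. A minimal local sketch of the same exit-code interpretation, run directly on a systemd host rather than over minikube ssh:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>`. systemctl prints the unit
// state ("active", "inactive", ...) and exits 0 only when the unit is
// active, so a non-zero exit here means the runtime is not running.
func isActive(unit string) (bool, string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil {
		return true, state, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, state, nil // e.g. exit status 3 for "inactive"
	}
	return false, state, err // systemctl itself could not be executed
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		active, state, err := isActive(unit)
		fmt.Printf("%-10s active=%v state=%q err=%v\n", unit, active, state, err)
	}
}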

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-748804 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-748804 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-n9fw4" [825e0df2-e770-45a9-8aad-cf0aa2936171] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-n9fw4" [825e0df2-e770-45a9-8aad-cf0aa2936171] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004060509s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764689363706660173" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764689363706660173" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764689363706660173" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001/test-1764689363706660173
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.723395ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:29:24.064806  406799 retry.go:31] will retry after 560.069694ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:29 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:29 test-1764689363706660173
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh cat /mount-9p/test-1764689363706660173
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-748804 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [861ada58-f518-4422-acaf-49337872cb3c] Pending
helpers_test.go:352: "busybox-mount" [861ada58-f518-4422-acaf-49337872cb3c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [861ada58-f518-4422-acaf-49337872cb3c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [861ada58-f518-4422-acaf-49337872cb3c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003912599s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-748804 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo142638695/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.17s)
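Annotation: the mount test above polls "findmnt -T /mount-9p | grep 9p" and, when the 9p mount is not yet visible, retries after a short delay (the "will retry after 560.069694ms" line) before proceeding. A minimal sketch of that poll-until-mounted pattern; the backoff values are illustrative and this is not the retry.go helper the test uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T <path>` until it succeeds or the deadline
// passes. findmnt exits non-zero while the mount is absent, which is what
// drives the retries seen in the log above.
func waitForMount(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		if err := exec.Command("findmnt", "-T", path).Run(); err == nil {
			return nil // mount is visible
		} else if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not visible after %s: %v", path, timeout, err)
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
}

func main() {
	if err := waitForMount("/mount-9p", 10*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("/mount-9p is mounted")
}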

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "433.404494ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.924272ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "390.344606ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "83.37142ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-748804 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-748804
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-748804
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-748804 image ls --format short --alsologtostderr:
I1202 15:35:35.886954  481330 out.go:360] Setting OutFile to fd 1 ...
I1202 15:35:35.887076  481330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:35.887088  481330 out.go:374] Setting ErrFile to fd 2...
I1202 15:35:35.887093  481330 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:35.887321  481330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:35:35.888035  481330 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:35.888209  481330 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:35.888710  481330 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:35:35.910051  481330 ssh_runner.go:195] Run: systemctl --version
I1202 15:35:35.910095  481330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:35:35.930330  481330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:35:36.036808  481330 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-748804 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-748804  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-748804  │ sha256:ae37f5 │ 990B   │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:45f3cc │ 23.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:8a4ded │ 25.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:7bb621 │ 17.2MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:aa9d02 │ 27.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 318kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-748804 image ls --format table --alsologtostderr:
I1202 15:35:38.368473  482352 out.go:360] Setting OutFile to fd 1 ...
I1202 15:35:38.368922  482352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:38.368934  482352 out.go:374] Setting ErrFile to fd 2...
I1202 15:35:38.368940  482352 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:38.369170  482352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:35:38.369801  482352 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:38.369922  482352 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:38.370437  482352 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:35:38.389612  482352 ssh_runner.go:195] Run: systemctl --version
I1202 15:35:38.389678  482352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:35:38.408796  482352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:35:38.509916  482352 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-748804 image ls --format json --alsologtostderr:
[{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-748804"],"size":"2372971"},{"id":"sha256:ae37f536915721b9eb7ce97471d6460e12436118831c328236fe805ec211353c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-748804"],"size":"990"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23550419"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6
f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"17226414"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9057171"},{"id":"sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"23119069"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags"
:["registry.k8s.io/pause:3.10.1"],"size":"317967"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22869579"},{"id":"sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"27669846"},{"id":"sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"25785436"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-748804 image ls --format json --alsologtostderr:
I1202 15:35:38.119037  482295 out.go:360] Setting OutFile to fd 1 ...
I1202 15:35:38.119202  482295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:38.119218  482295 out.go:374] Setting ErrFile to fd 2...
I1202 15:35:38.119226  482295 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:38.119492  482295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:35:38.120243  482295 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:38.120414  482295 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:38.121014  482295 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:35:38.145402  482295 ssh_runner.go:195] Run: systemctl --version
I1202 15:35:38.145451  482295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:35:38.165167  482295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:35:38.266272  482295 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)
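Annotation: the "image ls --format json" output above is an array of objects with id, repoDigests, repoTags, and size fields. A minimal sketch of decoding that shape in Go; the struct and sample are trimmed to what the log displays and are not minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
)

// listedImage mirrors the fields visible in the JSON output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// Shortened sample in the same shape as the log output.
	raw := `[{"id":"sha256:9056ab...","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-748804"],"size":"2372971"}]`
	var images []listedImage
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-55s %s bytes\n", img.RepoTags[0], img.Size)
	}
}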

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-748804 image ls --format yaml --alsologtostderr:
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-748804
size: "2372971"
- id: sha256:ae37f536915721b9eb7ce97471d6460e12436118831c328236fe805ec211353c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-748804
size: "990"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22869579"
- id: sha256:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "17226414"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "317967"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23550419"
- id: sha256:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "27669846"
- id: sha256:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "23119069"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9057171"
- id: sha256:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "25785436"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-748804 image ls --format yaml --alsologtostderr:
I1202 15:35:36.149406  481468 out.go:360] Setting OutFile to fd 1 ...
I1202 15:35:36.149713  481468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:36.149725  481468 out.go:374] Setting ErrFile to fd 2...
I1202 15:35:36.149729  481468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:36.150059  481468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:35:36.150821  481468 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:36.150960  481468 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:36.151422  481468 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:35:36.177845  481468 ssh_runner.go:195] Run: systemctl --version
I1202 15:35:36.177914  481468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:35:36.200075  481468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:35:36.301707  481468 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh pgrep buildkitd: exit status 1 (317.959968ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image build -t localhost/my-image:functional-748804 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-748804 image build -t localhost/my-image:functional-748804 testdata/build --alsologtostderr: (2.90420914s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-748804 image build -t localhost/my-image:functional-748804 testdata/build --alsologtostderr:
I1202 15:35:36.722971  481828 out.go:360] Setting OutFile to fd 1 ...
I1202 15:35:36.723307  481828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:36.723316  481828 out.go:374] Setting ErrFile to fd 2...
I1202 15:35:36.723321  481828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:35:36.723570  481828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
I1202 15:35:36.724288  481828 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:36.724954  481828 config.go:182] Loaded profile config "functional-748804": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 15:35:36.725441  481828 cli_runner.go:164] Run: docker container inspect functional-748804 --format={{.State.Status}}
I1202 15:35:36.749131  481828 ssh_runner.go:195] Run: systemctl --version
I1202 15:35:36.749197  481828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-748804
I1202 15:35:36.769902  481828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/functional-748804/id_rsa Username:docker}
I1202 15:35:36.873338  481828 build_images.go:162] Building image from path: /tmp/build.3271949532.tar
I1202 15:35:36.873438  481828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:35:36.882962  481828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3271949532.tar
I1202 15:35:36.888081  481828 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3271949532.tar: stat -c "%s %y" /var/lib/minikube/build/build.3271949532.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3271949532.tar': No such file or directory
I1202 15:35:36.888110  481828 ssh_runner.go:362] scp /tmp/build.3271949532.tar --> /var/lib/minikube/build/build.3271949532.tar (3072 bytes)
I1202 15:35:36.911585  481828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3271949532
I1202 15:35:36.922817  481828 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3271949532 -xf /var/lib/minikube/build/build.3271949532.tar
I1202 15:35:36.932994  481828 containerd.go:394] Building image: /var/lib/minikube/build/build.3271949532
I1202 15:35:36.933079  481828 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3271949532 --local dockerfile=/var/lib/minikube/build/build.3271949532 --output type=image,name=localhost/my-image:functional-748804
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:2d7e59a2a8a1518243b43ab508c3d3d326b0a8c542c803c7f9ff5282a560f1b9 done
#8 exporting config sha256:05aaff044f3194fc9099fd22d049f4cd103d443abb7f9a4cbe8848f801a0682e done
#8 naming to localhost/my-image:functional-748804 done
#8 DONE 0.1s
I1202 15:35:39.529731  481828 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3271949532 --local dockerfile=/var/lib/minikube/build/build.3271949532 --output type=image,name=localhost/my-image:functional-748804: (2.596605391s)
I1202 15:35:39.529835  481828 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3271949532
I1202 15:35:39.539641  481828 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3271949532.tar
I1202 15:35:39.548490  481828 build_images.go:218] Built localhost/my-image:functional-748804 from /tmp/build.3271949532.tar
I1202 15:35:39.548524  481828 build_images.go:134] succeeded building to: functional-748804
I1202 15:35:39.548531  481828 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
E1202 15:36:48.211277  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:37:15.604024  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:37:43.307907  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.47s)
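Annotation: in the build log above, the testdata/build directory is first packed into a tar (the "Building image from path: /tmp/build.3271949532.tar" step), copied into the node, unpacked, and fed to buildctl as the build context. A rough sketch of the packing step only, assuming a local testdata/build directory; this is not minikube's build_images.go implementation:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarDir packs the regular files under dir into a tar archive at dest,
// storing paths relative to dir so the context unpacks flat on the node.
func tarDir(dir, dest string) error {
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /tmp/build-context.tar")
}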

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image load --daemon kicbase/echo-server:functional-748804 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image load --daemon kicbase/echo-server:functional-748804 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-748804
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image load --daemon kicbase/echo-server:functional-748804 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo413932999/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.769971ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:29:32.212844  406799 retry.go:31] will retry after 333.944508ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo413932999/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "sudo umount -f /mount-9p": exit status 1 (324.897207ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-748804 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo413932999/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.86s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image save kicbase/echo-server:functional-748804 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image rm kicbase/echo-server:functional-748804 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 service list -o json
functional_test.go:1504: Took "573.082791ms" to run "out/minikube-linux-amd64 -p functional-748804 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30842
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-748804
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 image save --daemon kicbase/echo-server:functional-748804 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T" /mount1: exit status 1 (449.08263ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:29:34.184327  406799 retry.go:31] will retry after 669.333506ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-748804 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-748804 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3072562562/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30842
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.42s)
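Annotation: the test above resolves the hello-node service to the NodePort endpoint http://192.168.49.2:30842, an address specific to this run. A minimal sketch of probing such an endpoint once it is known, assuming the echo-server answers plain HTTP:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from this run's log; it changes with every cluster.
	url := "http://192.168.49.2:30842"
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Printf("%.200s\n", body) // first part of the echo-server reply
}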

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 476887: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-748804 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
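
Note: the tunnel subtests above start, duplicate, and tear down "minikube tunnel", which is what lets LoadBalancer services receive an external IP on the docker driver. A hedged sketch of the same lifecycle; the service name is illustrative and not taken from this run.

# start the tunnel in the background, as functional_test_tunnel_test.go does
out/minikube-linux-amd64 -p functional-748804 tunnel --alsologtostderr &
TUNNEL_PID=$!
# once the tunnel is up, a LoadBalancer service should report an external IP (service name is hypothetical)
kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# stop the tunnel, mirroring the DeleteTunnel step
kill $TUNNEL_PID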

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-748804
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (158.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1202 15:41:48.210908  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:42:15.603804  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m37.929220342s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (158.70s)
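
Note: the StartCluster step is a plain start of the profile; the --ha flag is what provisions the extra control-plane nodes (m02, m03) that the later serial subtests stop, restart, and delete. Condensed from the logged run:

# create the HA profile used by the rest of this suite (flags copied from the logged command)
out/minikube-linux-amd64 -p ha-081067 start --ha --memory 3072 --wait true --alsologtostderr -v 5 \
  --driver=docker --container-runtime=containerd
# verify every node reports Running/Configured
out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5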

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 kubectl -- rollout status deployment/busybox: (3.643453322s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-r6qrt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-rckp9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-tx4l6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-r6qrt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-rckp9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-tx4l6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-r6qrt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-rckp9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-tx4l6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.91s)
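
Note: a compact replay of the DNS checks above, assuming kubectl can reach the ha-081067 context and that the busybox replicas are the only pods in the default namespace, as in this run.

# deploy the test workload and wait for it, as ha_test.go does
kubectl --context ha-081067 apply -f ./testdata/ha/ha-pod-dns-test.yaml
kubectl --context ha-081067 rollout status deployment/busybox
# resolve an external and an in-cluster name from one replica (picks whichever pod is listed first)
POD=$(kubectl --context ha-081067 get pods -o jsonpath='{.items[0].metadata.name}')
kubectl --context ha-081067 exec "$POD" -- nslookup kubernetes.io
kubectl --context ha-081067 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local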

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-r6qrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-r6qrt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-rckp9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-rckp9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-tx4l6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 kubectl -- exec busybox-7b57f96db7-tx4l6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
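
Note: the pipeline the test runs inside each pod, shown once as a standalone sketch; the awk 'NR==5' filter assumes the BusyBox nslookup output format where the fifth line carries the resolved address, and the pod name below is just one of the replicas from this run.

# resolve the host's address from inside a busybox replica
kubectl --context ha-081067 exec busybox-7b57f96db7-r6qrt -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# ping the address the lookup resolved to in this run (the docker network gateway)
kubectl --context ha-081067 exec busybox-7b57f96db7-r6qrt -- sh -c "ping -c 1 192.168.49.1"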

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 node add --alsologtostderr -v 5: (23.426992761s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-081067 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp testdata/cp-test.txt ha-081067:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2312611940/001/cp-test_ha-081067.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067:/home/docker/cp-test.txt ha-081067-m02:/home/docker/cp-test_ha-081067_ha-081067-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test_ha-081067_ha-081067-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067:/home/docker/cp-test.txt ha-081067-m03:/home/docker/cp-test_ha-081067_ha-081067-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test_ha-081067_ha-081067-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067:/home/docker/cp-test.txt ha-081067-m04:/home/docker/cp-test_ha-081067_ha-081067-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test_ha-081067_ha-081067-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp testdata/cp-test.txt ha-081067-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2312611940/001/cp-test_ha-081067-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m02:/home/docker/cp-test.txt ha-081067:/home/docker/cp-test_ha-081067-m02_ha-081067.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test_ha-081067-m02_ha-081067.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m02:/home/docker/cp-test.txt ha-081067-m03:/home/docker/cp-test_ha-081067-m02_ha-081067-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test_ha-081067-m02_ha-081067-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m02:/home/docker/cp-test.txt ha-081067-m04:/home/docker/cp-test_ha-081067-m02_ha-081067-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test_ha-081067-m02_ha-081067-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp testdata/cp-test.txt ha-081067-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2312611940/001/cp-test_ha-081067-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m03:/home/docker/cp-test.txt ha-081067:/home/docker/cp-test_ha-081067-m03_ha-081067.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test_ha-081067-m03_ha-081067.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m03:/home/docker/cp-test.txt ha-081067-m02:/home/docker/cp-test_ha-081067-m03_ha-081067-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test_ha-081067-m03_ha-081067-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m03:/home/docker/cp-test.txt ha-081067-m04:/home/docker/cp-test_ha-081067-m03_ha-081067-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test_ha-081067-m03_ha-081067-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp testdata/cp-test.txt ha-081067-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2312611940/001/cp-test_ha-081067-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m04:/home/docker/cp-test.txt ha-081067:/home/docker/cp-test_ha-081067-m04_ha-081067.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test_ha-081067-m04_ha-081067.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m04:/home/docker/cp-test.txt ha-081067-m02:/home/docker/cp-test_ha-081067-m04_ha-081067-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test_ha-081067-m04_ha-081067-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 cp ha-081067-m04:/home/docker/cp-test.txt ha-081067-m03:/home/docker/cp-test_ha-081067-m04_ha-081067-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m03 "sudo cat /home/docker/cp-test_ha-081067-m04_ha-081067-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.52s)
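
Note: one round trip from the CopyFile matrix above, using commands copied from the logged run and assuming the ha-081067 profile is still up.

# host -> node: copy the fixture into the primary node and read it back over ssh
out/minikube-linux-amd64 -p ha-081067 cp testdata/cp-test.txt ha-081067:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067 "sudo cat /home/docker/cp-test.txt"
# node -> node: copy the same file to a secondary node and verify it there
out/minikube-linux-amd64 -p ha-081067 cp ha-081067:/home/docker/cp-test.txt ha-081067-m02:/home/docker/cp-test_ha-081067_ha-081067-m02.txt
out/minikube-linux-amd64 -p ha-081067 ssh -n ha-081067-m02 "sudo cat /home/docker/cp-test_ha-081067_ha-081067-m02.txt"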

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 node stop m02 --alsologtostderr -v 5: (12.093477796s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5: exit status 7 (731.545454ms)

                                                
                                                
-- stdout --
	ha-081067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-081067-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-081067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:43:24.720015  506032 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:43:24.720311  506032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:43:24.720321  506032 out.go:374] Setting ErrFile to fd 2...
	I1202 15:43:24.720326  506032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:43:24.720535  506032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:43:24.720734  506032 out.go:368] Setting JSON to false
	I1202 15:43:24.720762  506032 mustload.go:66] Loading cluster: ha-081067
	I1202 15:43:24.720860  506032 notify.go:221] Checking for updates...
	I1202 15:43:24.721122  506032 config.go:182] Loaded profile config "ha-081067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:43:24.721137  506032 status.go:174] checking status of ha-081067 ...
	I1202 15:43:24.721634  506032 cli_runner.go:164] Run: docker container inspect ha-081067 --format={{.State.Status}}
	I1202 15:43:24.742414  506032 status.go:371] ha-081067 host status = "Running" (err=<nil>)
	I1202 15:43:24.742455  506032 host.go:66] Checking if "ha-081067" exists ...
	I1202 15:43:24.742781  506032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081067
	I1202 15:43:24.761978  506032 host.go:66] Checking if "ha-081067" exists ...
	I1202 15:43:24.762239  506032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:43:24.762299  506032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081067
	I1202 15:43:24.783284  506032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/ha-081067/id_rsa Username:docker}
	I1202 15:43:24.881640  506032 ssh_runner.go:195] Run: systemctl --version
	I1202 15:43:24.888293  506032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:43:24.901274  506032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:43:24.962226  506032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 15:43:24.951689977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:43:24.962898  506032 kubeconfig.go:125] found "ha-081067" server: "https://192.168.49.254:8443"
	I1202 15:43:24.962930  506032 api_server.go:166] Checking apiserver status ...
	I1202 15:43:24.962966  506032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:43:24.976188  506032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	W1202 15:43:24.985310  506032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:43:24.985370  506032 ssh_runner.go:195] Run: ls
	I1202 15:43:24.989336  506032 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:43:24.993996  506032 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:43:24.994028  506032 status.go:463] ha-081067 apiserver status = Running (err=<nil>)
	I1202 15:43:24.994042  506032 status.go:176] ha-081067 status: &{Name:ha-081067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:43:24.994063  506032 status.go:174] checking status of ha-081067-m02 ...
	I1202 15:43:24.994402  506032 cli_runner.go:164] Run: docker container inspect ha-081067-m02 --format={{.State.Status}}
	I1202 15:43:25.015708  506032 status.go:371] ha-081067-m02 host status = "Stopped" (err=<nil>)
	I1202 15:43:25.015735  506032 status.go:384] host is not running, skipping remaining checks
	I1202 15:43:25.015744  506032 status.go:176] ha-081067-m02 status: &{Name:ha-081067-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:43:25.015780  506032 status.go:174] checking status of ha-081067-m03 ...
	I1202 15:43:25.016087  506032 cli_runner.go:164] Run: docker container inspect ha-081067-m03 --format={{.State.Status}}
	I1202 15:43:25.035608  506032 status.go:371] ha-081067-m03 host status = "Running" (err=<nil>)
	I1202 15:43:25.035634  506032 host.go:66] Checking if "ha-081067-m03" exists ...
	I1202 15:43:25.035937  506032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081067-m03
	I1202 15:43:25.055440  506032 host.go:66] Checking if "ha-081067-m03" exists ...
	I1202 15:43:25.055922  506032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:43:25.055994  506032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081067-m03
	I1202 15:43:25.076050  506032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/ha-081067-m03/id_rsa Username:docker}
	I1202 15:43:25.174251  506032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:43:25.187463  506032 kubeconfig.go:125] found "ha-081067" server: "https://192.168.49.254:8443"
	I1202 15:43:25.187497  506032 api_server.go:166] Checking apiserver status ...
	I1202 15:43:25.187540  506032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:43:25.199089  506032 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1277/cgroup
	W1202 15:43:25.207958  506032 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1277/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:43:25.208013  506032 ssh_runner.go:195] Run: ls
	I1202 15:43:25.212121  506032 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:43:25.216355  506032 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:43:25.216384  506032 status.go:463] ha-081067-m03 apiserver status = Running (err=<nil>)
	I1202 15:43:25.216396  506032 status.go:176] ha-081067-m03 status: &{Name:ha-081067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:43:25.216416  506032 status.go:174] checking status of ha-081067-m04 ...
	I1202 15:43:25.216751  506032 cli_runner.go:164] Run: docker container inspect ha-081067-m04 --format={{.State.Status}}
	I1202 15:43:25.236984  506032 status.go:371] ha-081067-m04 host status = "Running" (err=<nil>)
	I1202 15:43:25.237015  506032 host.go:66] Checking if "ha-081067-m04" exists ...
	I1202 15:43:25.237320  506032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081067-m04
	I1202 15:43:25.255530  506032 host.go:66] Checking if "ha-081067-m04" exists ...
	I1202 15:43:25.255928  506032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:43:25.255980  506032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081067-m04
	I1202 15:43:25.275529  506032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/ha-081067-m04/id_rsa Username:docker}
	I1202 15:43:25.373483  506032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:43:25.387490  506032 status.go:176] ha-081067-m04 status: &{Name:ha-081067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)
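
Note: in the run above, status exited 7 while m02 was stopped, so a scripted health check has to capture that code rather than abort on it. A small sketch against the same profile:

out/minikube-linux-amd64 -p ha-081067 node stop m02 --alsologtostderr -v 5
# exits 7 here because m02 is stopped; print the code instead of letting a set -e script die
out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5 || echo "status exit code: $?"
out/minikube-linux-amd64 -p ha-081067 node start m02 --alsologtostderr -v 5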

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 node start m02 --alsologtostderr -v 5: (7.97200521s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 stop --alsologtostderr -v 5: (37.444840523s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 start --wait true --alsologtostderr -v 5
E1202 15:44:23.349603  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.356062  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.367455  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.388941  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.430449  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.511968  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.673520  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:23.995248  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:24.637027  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:25.920733  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:28.482403  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:33.604718  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:44:43.846497  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:45:04.328750  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 start --wait true --alsologtostderr -v 5: (58.291403048s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.88s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 node delete m03 --alsologtostderr -v 5: (8.682444049s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.54s)
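
Note: after removing a control-plane node the test re-checks readiness with a go-template. The same check works from the shell; the template below is a slightly simplified quoting of the logged command and assumes kubectl points at the ha-081067 context.

out/minikube-linux-amd64 -p ha-081067 node delete m03 --alsologtostderr -v 5
# every remaining node should print True for its Ready condition
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'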

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 stop --alsologtostderr -v 5
E1202 15:45:45.290852  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 stop --alsologtostderr -v 5: (36.139254826s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5: exit status 7 (128.26628ms)

                                                
                                                
-- stdout --
	ha-081067
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081067-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081067-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:45:58.465936  522273 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:45:58.466197  522273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:45:58.466206  522273 out.go:374] Setting ErrFile to fd 2...
	I1202 15:45:58.466211  522273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:45:58.466420  522273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:45:58.466573  522273 out.go:368] Setting JSON to false
	I1202 15:45:58.466598  522273 mustload.go:66] Loading cluster: ha-081067
	I1202 15:45:58.466784  522273 notify.go:221] Checking for updates...
	I1202 15:45:58.467517  522273 config.go:182] Loaded profile config "ha-081067": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:45:58.467564  522273 status.go:174] checking status of ha-081067 ...
	I1202 15:45:58.468799  522273 cli_runner.go:164] Run: docker container inspect ha-081067 --format={{.State.Status}}
	I1202 15:45:58.488987  522273 status.go:371] ha-081067 host status = "Stopped" (err=<nil>)
	I1202 15:45:58.489007  522273 status.go:384] host is not running, skipping remaining checks
	I1202 15:45:58.489014  522273 status.go:176] ha-081067 status: &{Name:ha-081067 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:45:58.489052  522273 status.go:174] checking status of ha-081067-m02 ...
	I1202 15:45:58.489335  522273 cli_runner.go:164] Run: docker container inspect ha-081067-m02 --format={{.State.Status}}
	I1202 15:45:58.508182  522273 status.go:371] ha-081067-m02 host status = "Stopped" (err=<nil>)
	I1202 15:45:58.508204  522273 status.go:384] host is not running, skipping remaining checks
	I1202 15:45:58.508214  522273 status.go:176] ha-081067-m02 status: &{Name:ha-081067-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:45:58.508267  522273 status.go:174] checking status of ha-081067-m04 ...
	I1202 15:45:58.508521  522273 cli_runner.go:164] Run: docker container inspect ha-081067-m04 --format={{.State.Status}}
	I1202 15:45:58.527324  522273 status.go:371] ha-081067-m04 host status = "Stopped" (err=<nil>)
	I1202 15:45:58.527351  522273 status.go:384] host is not running, skipping remaining checks
	I1202 15:45:58.527359  522273 status.go:176] ha-081067-m04 status: &{Name:ha-081067-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (52.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1202 15:46:48.210917  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (51.831430021s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (36.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 node add --control-plane --alsologtostderr -v 5
E1202 15:47:07.212835  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:47:15.603846  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-081067 node add --control-plane --alsologtostderr -v 5: (35.782048338s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-081067 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

                                                
                                    
TestJSONOutput/start/Command (40.14s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-788094 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-788094 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (40.138414531s)
--- PASS: TestJSONOutput/start/Command (40.14s)
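
Note: with --output=json minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput block further down shows their shape: specversion, type, and a data payload with currentstep, totalsteps, and message). A hedged sketch for pulling just the step progress out of that stream, assuming jq is installed; the flags are copied from the logged run.

# print progress as "current/total  message" while the cluster starts
out/minikube-linux-amd64 start -p json-output-788094 --output=json --user=testUser --memory=3072 --wait=true \
  --driver=docker --container-runtime=containerd \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps)  \(.data.message)"'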

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-788094 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-788094 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.9s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-788094 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-788094 --output=json --user=testUser: (5.895372088s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-746470 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-746470 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.191357ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"957cec16-3578-4fcf-b4c7-4fc788b69dca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-746470] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f0dc7f9-32b4-4d5e-a559-2ec10adb2120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"81acf487-ed0d-4b75-bc2c-f5736c51d4f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"adadd637-48e3-4e78-bfb2-dcf94d798213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig"}}
	{"specversion":"1.0","id":"0678d7fb-df5b-4f9b-9f5a-6e5414db5a0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube"}}
	{"specversion":"1.0","id":"a0c9c0ec-cad5-4dd5-b650-843b2271a3ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a5d6fc41-68d0-43f9-973c-0128557a02f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f04510da-49ac-4b13-a5a1-54e7a1e349b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-746470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-746470
--- PASS: TestErrorJSONOutput (0.25s)
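Each stdout line captured above is a CloudEvents-style JSON envelope: the "type" field distinguishes step, info, and error events, and the "data" payload carries minikube's message plus, for error events, the exit code and error name. Below is a minimal sketch of consuming that stream (Go; the event struct is illustrative and mirrors only the fields visible in the output above, it is not a type exported by minikube):

    // decode_events.go - minimal sketch: read minikube --output=json lines from
    // stdin and print each event's type and message, surfacing the exit code
    // carried by error events.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors only the fields visible in the captured stdout above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON lines
            }
            fmt.Println(ev.Type, ev.Data["message"])
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Println("exit code:", ev.Data["exitcode"])
            }
        }
    }

Feeding the stdout block above through this sketch would report the io.k8s.sigs.minikube.error event with exitcode 56, matching the non-zero process exit the test asserts.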

                                                
                                    
TestKicCustomNetwork/create_custom_network (33.5s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-121489 --network=
E1202 15:48:38.669296  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-121489 --network=: (31.306082156s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-121489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-121489
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-121489: (2.169105575s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.50s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.63s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-430030 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-430030 --network=bridge: (19.547929851s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-430030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-430030
E1202 15:49:23.348801  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-430030: (2.062929853s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.63s)

                                                
                                    
TestKicExistingNetwork (26.44s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1202 15:49:25.296524  406799 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 15:49:25.315697  406799 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 15:49:25.315774  406799 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 15:49:25.315803  406799 cli_runner.go:164] Run: docker network inspect existing-network
W1202 15:49:25.335262  406799 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 15:49:25.335301  406799 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1202 15:49:25.335317  406799 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1202 15:49:25.335468  406799 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 15:49:25.353988  406799 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8484746122f5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:11:d6:e4:28:98} reservation:<nil>}
I1202 15:49:25.354464  406799 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e21a0}
I1202 15:49:25.354501  406799 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 15:49:25.354551  406799 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 15:49:25.405358  406799 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-383821 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-383821 --network=existing-network: (24.244456075s)
helpers_test.go:175: Cleaning up "existing-network-383821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-383821
E1202 15:49:51.054861  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:49:51.285503  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-383821: (2.051123381s)
I1202 15:49:51.721625  406799 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.44s)
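The setup steps logged above pre-create a labelled bridge network so that the subsequent "minikube start -p existing-network-383821 --network=existing-network" attaches to it instead of creating its own. A minimal sketch of that setup step (Go via os/exec, assuming the Docker CLI is on PATH; the subnet and gateway are simply the free range the test happened to pick on this host):

    // precreate_network.go - minimal sketch of the setup logged above: create a
    // labelled bridge network for minikube's --network=existing-network to reuse.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Flags copied verbatim from the cli_runner invocation in the log;
        // the subnet is only an example value chosen at runtime by the test.
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("docker network create failed: %v\n%s", err, out)
        }
    }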

                                                
                                    
TestKicCustomSubnet (23.95s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-700159 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-700159 --subnet=192.168.60.0/24: (21.720446449s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-700159 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-700159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-700159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-700159: (2.201689376s)
--- PASS: TestKicCustomSubnet (23.95s)
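The verification step at kic_custom_network_test.go:161 reads the subnet back with a Go template over the network's IPAM config. A standalone sketch of the same check (Go via os/exec, assuming the Docker CLI is on PATH; the network name is the profile created in this particular run and would be substituted in practice):

    // check_subnet.go - minimal sketch of the verification step logged above:
    // read back the cluster network's subnet with the same inspect template.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "network", "inspect",
            "custom-subnet-700159", // network/profile name from this run
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subnet:", strings.TrimSpace(string(out))) // expected 192.168.60.0/24 here
    }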

                                                
                                    
TestKicStaticIP (26.28s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-519297 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-519297 --static-ip=192.168.200.200: (23.911402068s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-519297 ip
helpers_test.go:175: Cleaning up "static-ip-519297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-519297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-519297: (2.211763177s)
--- PASS: TestKicStaticIP (26.28s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (48.06s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-682287 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-682287 --driver=docker  --container-runtime=containerd: (22.888148009s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-685059 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-685059 --driver=docker  --container-runtime=containerd: (19.436481726s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-682287
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-685059
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-685059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-685059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-685059: (1.997659595s)
helpers_test.go:175: Cleaning up "first-682287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-682287
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-682287: (2.421384749s)
--- PASS: TestMinikubeProfile (48.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-362967 --memory=3072 --mount-string /tmp/TestMountStartserial1458742043/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-362967 --memory=3072 --mount-string /tmp/TestMountStartserial1458742043/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.479686242s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-362967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381775 --memory=3072 --mount-string /tmp/TestMountStartserial1458742043/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381775 --memory=3072 --mount-string /tmp/TestMountStartserial1458742043/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.623812924s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.62s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-362967 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-362967 --alsologtostderr -v=5: (1.716283184s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-381775
E1202 15:51:48.211261  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-381775: (1.276080858s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.9s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381775
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381775: (6.904416076s)
--- PASS: TestMountStart/serial/RestartStopped (7.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381775 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (65.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-937665 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1202 15:52:15.605487  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-937665 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.631553808s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-937665 -- rollout status deployment/busybox: (3.161917316s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-qqksz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-rbbjz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-qqksz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-rbbjz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-qqksz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-rbbjz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.76s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-qqksz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-qqksz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-rbbjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-937665 -- exec busybox-7b57f96db7-rbbjz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (23.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-937665 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-937665 -v=5 --alsologtostderr: (22.407415014s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-937665 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp testdata/cp-test.txt multinode-937665:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile503178335/001/cp-test_multinode-937665.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665:/home/docker/cp-test.txt multinode-937665-m02:/home/docker/cp-test_multinode-937665_multinode-937665-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test_multinode-937665_multinode-937665-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665:/home/docker/cp-test.txt multinode-937665-m03:/home/docker/cp-test_multinode-937665_multinode-937665-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test_multinode-937665_multinode-937665-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp testdata/cp-test.txt multinode-937665-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile503178335/001/cp-test_multinode-937665-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m02:/home/docker/cp-test.txt multinode-937665:/home/docker/cp-test_multinode-937665-m02_multinode-937665.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test_multinode-937665-m02_multinode-937665.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m02:/home/docker/cp-test.txt multinode-937665-m03:/home/docker/cp-test_multinode-937665-m02_multinode-937665-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test_multinode-937665-m02_multinode-937665-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp testdata/cp-test.txt multinode-937665-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile503178335/001/cp-test_multinode-937665-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m03:/home/docker/cp-test.txt multinode-937665:/home/docker/cp-test_multinode-937665-m03_multinode-937665.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665 "sudo cat /home/docker/cp-test_multinode-937665-m03_multinode-937665.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 cp multinode-937665-m03:/home/docker/cp-test.txt multinode-937665-m02:/home/docker/cp-test_multinode-937665-m03_multinode-937665-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 ssh -n multinode-937665-m02 "sudo cat /home/docker/cp-test_multinode-937665-m03_multinode-937665-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-937665 node stop m03: (1.281550549s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-937665 status: exit status 7 (529.00149ms)

                                                
                                                
-- stdout --
	multinode-937665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-937665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-937665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr: exit status 7 (522.408087ms)

                                                
                                                
-- stdout --
	multinode-937665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-937665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-937665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:53:46.215520  584317 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:53:46.215829  584317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:53:46.215840  584317 out.go:374] Setting ErrFile to fd 2...
	I1202 15:53:46.215846  584317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:53:46.216109  584317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:53:46.216336  584317 out.go:368] Setting JSON to false
	I1202 15:53:46.216369  584317 mustload.go:66] Loading cluster: multinode-937665
	I1202 15:53:46.216505  584317 notify.go:221] Checking for updates...
	I1202 15:53:46.216801  584317 config.go:182] Loaded profile config "multinode-937665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:53:46.216820  584317 status.go:174] checking status of multinode-937665 ...
	I1202 15:53:46.217351  584317 cli_runner.go:164] Run: docker container inspect multinode-937665 --format={{.State.Status}}
	I1202 15:53:46.236893  584317 status.go:371] multinode-937665 host status = "Running" (err=<nil>)
	I1202 15:53:46.236922  584317 host.go:66] Checking if "multinode-937665" exists ...
	I1202 15:53:46.237221  584317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-937665
	I1202 15:53:46.257184  584317 host.go:66] Checking if "multinode-937665" exists ...
	I1202 15:53:46.257468  584317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:53:46.257517  584317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-937665
	I1202 15:53:46.275902  584317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33295 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/multinode-937665/id_rsa Username:docker}
	I1202 15:53:46.374553  584317 ssh_runner.go:195] Run: systemctl --version
	I1202 15:53:46.381134  584317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:53:46.393781  584317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:53:46.453215  584317 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-02 15:53:46.443469585 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:53:46.453788  584317 kubeconfig.go:125] found "multinode-937665" server: "https://192.168.67.2:8443"
	I1202 15:53:46.453818  584317 api_server.go:166] Checking apiserver status ...
	I1202 15:53:46.453857  584317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:53:46.466104  584317 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1306/cgroup
	W1202 15:53:46.474974  584317 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1306/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:53:46.475111  584317 ssh_runner.go:195] Run: ls
	I1202 15:53:46.479292  584317 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 15:53:46.486882  584317 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 15:53:46.486906  584317 status.go:463] multinode-937665 apiserver status = Running (err=<nil>)
	I1202 15:53:46.486916  584317 status.go:176] multinode-937665 status: &{Name:multinode-937665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:53:46.486940  584317 status.go:174] checking status of multinode-937665-m02 ...
	I1202 15:53:46.487206  584317 cli_runner.go:164] Run: docker container inspect multinode-937665-m02 --format={{.State.Status}}
	I1202 15:53:46.505612  584317 status.go:371] multinode-937665-m02 host status = "Running" (err=<nil>)
	I1202 15:53:46.505638  584317 host.go:66] Checking if "multinode-937665-m02" exists ...
	I1202 15:53:46.505934  584317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-937665-m02
	I1202 15:53:46.523700  584317 host.go:66] Checking if "multinode-937665-m02" exists ...
	I1202 15:53:46.523989  584317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:53:46.524057  584317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-937665-m02
	I1202 15:53:46.542208  584317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33300 SSHKeyPath:/home/jenkins/minikube-integration/22021-403182/.minikube/machines/multinode-937665-m02/id_rsa Username:docker}
	I1202 15:53:46.639507  584317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:53:46.651902  584317 status.go:176] multinode-937665-m02 status: &{Name:multinode-937665-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:53:46.651937  584317 status.go:174] checking status of multinode-937665-m03 ...
	I1202 15:53:46.652228  584317 cli_runner.go:164] Run: docker container inspect multinode-937665-m03 --format={{.State.Status}}
	I1202 15:53:46.671087  584317 status.go:371] multinode-937665-m03 host status = "Stopped" (err=<nil>)
	I1202 15:53:46.671110  584317 status.go:384] host is not running, skipping remaining checks
	I1202 15:53:46.671117  584317 status.go:176] multinode-937665-m03 status: &{Name:multinode-937665-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
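The plain-text status captured above follows a simple layout: a node name on its own line, then "key: value" fields, with a blank line between nodes, and the command exits non-zero once any node is stopped. A minimal sketch that groups those fields per node (Go, written against exactly the text shown; this is not minikube's own status code):

    // parse_status.go - minimal sketch: read `minikube status` text from stdin
    // and group the "key: value" fields under each node name.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        nodes := map[string]map[string]string{}
        var order []string
        var current string
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" {
                current = ""
                continue
            }
            if k, v, ok := strings.Cut(line, ": "); ok && current != "" {
                nodes[current][k] = v
                continue
            }
            // A line without "key: value" starts a new node block.
            current = line
            nodes[current] = map[string]string{}
            order = append(order, current)
        }
        for _, name := range order {
            fmt.Printf("%s: host=%s kubelet=%s\n", name, nodes[name]["host"], nodes[name]["kubelet"])
        }
    }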

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-937665 node start m03 -v=5 --alsologtostderr: (6.276034685s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (75.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-937665
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-937665
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-937665: (25.071830641s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-937665 --wait=true -v=5 --alsologtostderr
E1202 15:54:23.348652  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-937665 --wait=true -v=5 --alsologtostderr: (50.63802614s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-937665
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.84s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-937665 node delete m03: (4.709194034s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-937665 stop: (23.901286705s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-937665 status: exit status 7 (109.784404ms)

                                                
                                                
-- stdout --
	multinode-937665
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-937665-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr: exit status 7 (102.680534ms)

                                                
                                                
-- stdout --
	multinode-937665
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-937665-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:55:38.945982  594062 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:55:38.946077  594062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:38.946082  594062 out.go:374] Setting ErrFile to fd 2...
	I1202 15:55:38.946086  594062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:38.946312  594062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:55:38.946527  594062 out.go:368] Setting JSON to false
	I1202 15:55:38.946554  594062 mustload.go:66] Loading cluster: multinode-937665
	I1202 15:55:38.946703  594062 notify.go:221] Checking for updates...
	I1202 15:55:38.947014  594062 config.go:182] Loaded profile config "multinode-937665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:55:38.947033  594062 status.go:174] checking status of multinode-937665 ...
	I1202 15:55:38.947535  594062 cli_runner.go:164] Run: docker container inspect multinode-937665 --format={{.State.Status}}
	I1202 15:55:38.966518  594062 status.go:371] multinode-937665 host status = "Stopped" (err=<nil>)
	I1202 15:55:38.966540  594062 status.go:384] host is not running, skipping remaining checks
	I1202 15:55:38.966546  594062 status.go:176] multinode-937665 status: &{Name:multinode-937665 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:55:38.966588  594062 status.go:174] checking status of multinode-937665-m02 ...
	I1202 15:55:38.966872  594062 cli_runner.go:164] Run: docker container inspect multinode-937665-m02 --format={{.State.Status}}
	I1202 15:55:38.985251  594062 status.go:371] multinode-937665-m02 host status = "Stopped" (err=<nil>)
	I1202 15:55:38.985275  594062 status.go:384] host is not running, skipping remaining checks
	I1202 15:55:38.985282  594062 status.go:176] multinode-937665-m02 status: &{Name:multinode-937665-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-937665 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-937665 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.82761496s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-937665 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.46s)
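The readiness assertion at multinode_test.go:404 uses a kubectl go-template over each node's conditions. An equivalent check sketched in Go against the standard core/v1 Node JSON (assuming kubectl is on PATH and the kubeconfig points at the restarted cluster; the struct mirrors only the fields needed here):

    // ready_check.go - minimal sketch: fetch nodes as JSON via kubectl and report
    // each node's Ready condition, mirroring the go-template check in the test.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var list nodeList
        if err := json.Unmarshal(out, &list); err != nil {
            log.Fatal(err)
        }
        for _, n := range list.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
                }
            }
        }
    }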

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-937665
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-937665-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-937665-m02 --driver=docker  --container-runtime=containerd: exit status 14 (83.035616ms)

                                                
                                                
-- stdout --
	* [multinode-937665-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-937665-m02' is duplicated with machine name 'multinode-937665-m02' in profile 'multinode-937665'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-937665-m03 --driver=docker  --container-runtime=containerd
E1202 15:56:48.210566  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-937665-m03 --driver=docker  --container-runtime=containerd: (23.732835065s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-937665
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-937665: exit status 80 (328.794175ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-937665 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-937665-m03 already exists in multinode-937665-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-937665-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-937665-m03: (2.005795845s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.22s)

                                                
                                    
TestPreload (112.36s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-545531 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1202 15:57:15.603921  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-545531 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (49.929354056s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-545531 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-545531 image pull gcr.io/k8s-minikube/busybox: (2.344965911s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-545531
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-545531: (6.760048709s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-545531 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-545531 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.629904697s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-545531 image list
helpers_test.go:175: Cleaning up "test-preload-545531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-545531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-545531: (2.45439704s)
--- PASS: TestPreload (112.36s)
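
Note: the preload check above reduces to the command sequence below; a minimal sketch using the flags from the log (the profile name is illustrative):

    # Start without the preloaded image tarball, then pull an extra image into containerd.
    minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox

    # Stop and restart with preload enabled; the pulled image should still be listed.
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --wait=true --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list | grep busybox

    minikube delete -p preload-demo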

                                                
                                    
TestScheduledStopUnix (99.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-602290 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-602290 --memory=3072 --driver=docker  --container-runtime=containerd: (22.8748676s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-602290 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:59:10.259413  612354 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:59:10.259534  612354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:10.259545  612354 out.go:374] Setting ErrFile to fd 2...
	I1202 15:59:10.259551  612354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:10.259791  612354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:59:10.260061  612354 out.go:368] Setting JSON to false
	I1202 15:59:10.260171  612354 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:10.260620  612354 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:59:10.260758  612354 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/config.json ...
	I1202 15:59:10.260998  612354 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:10.261115  612354 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-602290 -n scheduled-stop-602290
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-602290 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:59:10.684229  612504 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:59:10.684528  612504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:10.684539  612504 out.go:374] Setting ErrFile to fd 2...
	I1202 15:59:10.684546  612504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:10.684786  612504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:59:10.685062  612504 out.go:368] Setting JSON to false
	I1202 15:59:10.685320  612504 daemonize_unix.go:73] killing process 612389 as it is an old scheduled stop
	I1202 15:59:10.685434  612504 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:10.685813  612504 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:59:10.685901  612504 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/config.json ...
	I1202 15:59:10.686126  612504 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:10.686285  612504 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 15:59:10.691557  406799 retry.go:31] will retry after 66.882µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.692730  406799 retry.go:31] will retry after 181.959µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.693891  406799 retry.go:31] will retry after 139.83µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.695024  406799 retry.go:31] will retry after 274.677µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.696183  406799 retry.go:31] will retry after 301.367µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.697320  406799 retry.go:31] will retry after 623.339µs: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.698476  406799 retry.go:31] will retry after 1.219502ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.700662  406799 retry.go:31] will retry after 2.000845ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.702915  406799 retry.go:31] will retry after 1.496073ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.705176  406799 retry.go:31] will retry after 3.797755ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.709426  406799 retry.go:31] will retry after 4.211826ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.714726  406799 retry.go:31] will retry after 11.216959ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.727005  406799 retry.go:31] will retry after 8.743896ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.736338  406799 retry.go:31] will retry after 21.70207ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.758646  406799 retry.go:31] will retry after 36.610287ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
I1202 15:59:10.795985  406799 retry.go:31] will retry after 62.523273ms: open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-602290 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1202 15:59:23.352576  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-602290 -n scheduled-stop-602290
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-602290
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-602290 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:59:36.663696  613377 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:59:36.663814  613377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:36.663829  613377 out.go:374] Setting ErrFile to fd 2...
	I1202 15:59:36.663835  613377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:59:36.664021  613377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 15:59:36.664284  613377 out.go:368] Setting JSON to false
	I1202 15:59:36.664367  613377 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:36.664703  613377 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 15:59:36.664774  613377 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/scheduled-stop-602290/config.json ...
	I1202 15:59:36.664967  613377 mustload.go:66] Loading cluster: scheduled-stop-602290
	I1202 15:59:36.665068  613377 config.go:182] Loaded profile config "scheduled-stop-602290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-602290
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-602290: exit status 7 (85.126159ms)

                                                
                                                
-- stdout --
	scheduled-stop-602290
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-602290 -n scheduled-stop-602290
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-602290 -n scheduled-stop-602290: exit status 7 (86.141528ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-602290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-602290
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-602290: (5.075760319s)
--- PASS: TestScheduledStopUnix (99.60s)
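
Note: the scheduled-stop flow above maps onto a handful of CLI calls; a minimal sketch (the profile name is illustrative):

    # Schedule a stop, inspect the countdown, then cancel it.
    minikube stop -p sched-demo --schedule 5m
    minikube status --format='{{.TimeToStop}}' -p sched-demo
    minikube stop -p sched-demo --cancel-scheduled

    # Schedule a short stop and let it fire; `status` then exits 7 and reports Stopped.
    minikube stop -p sched-demo --schedule 15s
    sleep 20
    minikube status -p sched-demo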

                                                
                                    
TestInsufficientStorage (11.86s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-878432 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-878432 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.304019031s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3e6c9abf-fedb-4f71-b0db-516e5de13ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-878432] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa517faf-f4e1-479f-adc9-83fab7e96029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"f0c39807-3e74-4d96-9207-f07a745bee94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f523497-15ff-42b6-a845-c72154493406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig"}}
	{"specversion":"1.0","id":"0a4fbe48-a840-4287-9eca-e194b997b082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube"}}
	{"specversion":"1.0","id":"58094c0c-e2ad-4826-8cce-53059c21d51c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3720c8e2-fe8c-4349-9a86-129ea487d6fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e8845c66-e740-4177-af51-a9e3bc0ce013","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5e7d26b3-166a-4684-a72c-25884c9f2f8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a08a47e9-7652-452c-8fa9-b27c6b251fd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"01c2d63b-ec06-4d89-8a92-8c57aee6b0a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c2360a9c-ad4d-445a-be0c-8d164a3be9a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-878432\" primary control-plane node in \"insufficient-storage-878432\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"592cefa1-dcff-4745-8b35-73b63c312c3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"67a2ab67-e6be-477a-8972-654517298ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a0748b5-10dc-45a3-9414-132f4431b640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-878432 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-878432 --output=json --layout=cluster: exit status 7 (308.002397ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-878432","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-878432","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 16:00:36.520202  615648 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-878432" does not appear in /home/jenkins/minikube-integration/22021-403182/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-878432 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-878432 --output=json --layout=cluster: exit status 7 (307.12782ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-878432","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-878432","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 16:00:36.828342  615761 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-878432" does not appear in /home/jenkins/minikube-integration/22021-403182/kubeconfig
	E1202 16:00:36.839063  615761 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/insufficient-storage-878432/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-878432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-878432
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-878432: (1.937485841s)
--- PASS: TestInsufficientStorage (11.86s)
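
Note: with --output=json minikube emits one CloudEvent per line, so the error event above can be filtered with standard tooling; a sketch assuming jq is available and reusing the storage-override environment variables shown in the log (profile name illustrative):

    # Simulate a nearly full /var and capture the JSON event stream (start fails with exit code 26).
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=3072 --output=json --driver=docker --container-runtime=containerd \
      > events.ndjson || true

    # Keep only error events and show their name and exit code (RSRC_DOCKER_STORAGE / 26 here).
    jq -c 'select(.type == "io.k8s.sigs.minikube.error") | {name: .data.name, exitcode: .data.exitcode}' events.ndjson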

                                                
                                    
TestRunningBinaryUpgrade (47.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1614895830 start -p running-upgrade-143999 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1614895830 start -p running-upgrade-143999 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.953266141s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-143999 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-143999 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (21.062628156s)
helpers_test.go:175: Cleaning up "running-upgrade-143999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-143999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-143999: (2.386480835s)
--- PASS: TestRunningBinaryUpgrade (47.66s)

                                                
                                    
TestKubernetesUpgrade (322.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.85451591s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-642005
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-642005: (1.294740268s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-642005 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-642005 status --format={{.Host}}: exit status 7 (85.713523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m47.539479567s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-642005 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (97.699938ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-642005] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642005
	    minikube start -p kubernetes-upgrade-642005 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6420052 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-642005 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-642005 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.832497139s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-642005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-642005: (2.420648959s)
--- PASS: TestKubernetesUpgrade (322.19s)
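
Note: the upgrade path above is start-old, stop, start-new, plus a check that downgrading is refused; a minimal sketch with the versions from the log (the profile name is illustrative):

    minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=containerd

    # Downgrading the existing cluster is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
    minikube start -p upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd; echo "exit: $?"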

                                                
                                    
TestMissingContainerUpgrade (83.75s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2514713384 start -p missing-upgrade-911113 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2514713384 start -p missing-upgrade-911113 --memory=3072 --driver=docker  --container-runtime=containerd: (24.641425861s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-911113
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-911113
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-911113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-911113 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (52.71173099s)
helpers_test.go:175: Cleaning up "missing-upgrade-911113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-911113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-911113: (2.03135957s)
--- PASS: TestMissingContainerUpgrade (83.75s)
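
Note: the "missing container" case is produced by removing the node container behind minikube's back and letting the newer binary recreate it; a minimal sketch (the old-release binary path and profile name are illustrative):

    # Create a cluster with an older minikube release, then delete its Docker container directly.
    /tmp/minikube-v1.35.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=containerd
    docker stop missing-demo && docker rm missing-demo

    # Starting the same profile with the binary under test should recreate the container.
    out/minikube-linux-amd64 start -p missing-demo --memory=3072 --driver=docker --container-runtime=containerd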

                                                
                                    
TestPause/serial/Start (50.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970863 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-970863 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.076619427s)
--- PASS: TestPause/serial/Start (50.08s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.646257ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-915776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (26.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-915776 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-915776 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.950176919s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-915776 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970863 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-970863 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.665077864s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.68s)

                                                
                                    
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-970863 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-970863 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-970863 --output=json --layout=cluster: exit status 2 (357.021184ms)

                                                
                                                
-- stdout --
	{"Name":"pause-970863","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-970863","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-970863 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-970863 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-970863 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-970863 --alsologtostderr -v=5: (2.949245636s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.68s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-970863
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-970863: exit status 1 (19.399607ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-970863: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)
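
Note: the cleanup verification above is a set of docker queries keyed on the deleted profile's name; a minimal sketch (profile name illustrative):

    # After `minikube delete -p pause-demo`, nothing named after the profile should remain.
    docker ps -a | grep pause-demo || echo "container gone"
    docker volume inspect pause-demo || echo "volume gone (expected: no such volume)"
    docker network ls | grep pause-demo || echo "network gone"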

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (319.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.992752855 start -p stopped-upgrade-792276 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1202 16:01:48.211284  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.992752855 start -p stopped-upgrade-792276 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (55.268251321s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.992752855 -p stopped-upgrade-792276 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.992752855 -p stopped-upgrade-792276 stop: (1.847498022s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-792276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-792276 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m22.050995848s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (319.17s)
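
Note: the stopped-binary upgrade is the same start/stop/start sequence, except the first two steps use the older release binary; a minimal sketch (the old binary's path and the profile name are illustrative, flags are from the log):

    # Create and stop a cluster with an older minikube release...
    /tmp/minikube-v1.35.0 start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.35.0 -p stopped-demo stop

    # ...then bring the same profile back up with the binary under test.
    out/minikube-linux-amd64 start -p stopped-demo --memory=3072 --driver=docker --container-runtime=containerd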

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.174273977s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-915776 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-915776 status -o json: exit status 2 (340.679029ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-915776","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-915776
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-915776: (2.240376744s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.76s)

                                                
                                    
TestNoKubernetes/serial/Start (9.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-915776 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.70885788s)
--- PASS: TestNoKubernetes/serial/Start (9.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22021-403182/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-915776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-915776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.593594ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-915776
E1202 16:02:15.604147  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-915776: (2.119264273s)
--- PASS: TestNoKubernetes/serial/Stop (2.12s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-915776 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-915776 --driver=docker  --container-runtime=containerd: (7.52884721s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-915776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-915776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.69364ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)
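
Note: the kubelet check used throughout the NoKubernetes tests is a plain systemctl probe over SSH; a minimal sketch (profile name illustrative):

    # Start a node without any Kubernetes components, then confirm kubelet is not active.
    minikube start -p nok8s-demo --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; echo "kubelet check exit: $?"   # non-zero means kubelet is not running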

                                                
                                    
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-373254 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-373254 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (175.53959ms)

                                                
                                                
-- stdout --
	* [false-373254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 16:03:40.383495  662791 out.go:360] Setting OutFile to fd 1 ...
	I1202 16:03:40.383633  662791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:03:40.383644  662791 out.go:374] Setting ErrFile to fd 2...
	I1202 16:03:40.383650  662791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 16:03:40.383890  662791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-403182/.minikube/bin
	I1202 16:03:40.384412  662791 out.go:368] Setting JSON to false
	I1202 16:03:40.385687  662791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":9962,"bootTime":1764681458,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 16:03:40.385839  662791 start.go:143] virtualization: kvm guest
	I1202 16:03:40.388107  662791 out.go:179] * [false-373254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 16:03:40.389560  662791 notify.go:221] Checking for updates...
	I1202 16:03:40.389818  662791 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 16:03:40.391401  662791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 16:03:40.392798  662791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-403182/kubeconfig
	I1202 16:03:40.394262  662791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-403182/.minikube
	I1202 16:03:40.398898  662791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 16:03:40.400386  662791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 16:03:40.402148  662791 config.go:182] Loaded profile config "cert-expiration-016294": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 16:03:40.402248  662791 config.go:182] Loaded profile config "kubernetes-upgrade-642005": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 16:03:40.402321  662791 config.go:182] Loaded profile config "stopped-upgrade-792276": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1202 16:03:40.402404  662791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 16:03:40.427027  662791 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 16:03:40.427143  662791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 16:03:40.487142  662791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 16:03:40.477117131 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 16:03:40.487270  662791 docker.go:319] overlay module found
	I1202 16:03:40.488996  662791 out.go:179] * Using the docker driver based on user configuration
	I1202 16:03:40.490198  662791 start.go:309] selected driver: docker
	I1202 16:03:40.490217  662791 start.go:927] validating driver "docker" against <nil>
	I1202 16:03:40.490228  662791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 16:03:40.492205  662791 out.go:203] 
	W1202 16:03:40.493494  662791 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1202 16:03:40.494976  662791 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-373254 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-016294
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:03:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642005
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-792276
contexts:
- context:
    cluster: cert-expiration-016294
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-016294
  name: cert-expiration-016294
- context:
    cluster: kubernetes-upgrade-642005
    user: kubernetes-upgrade-642005
  name: kubernetes-upgrade-642005
- context:
    cluster: stopped-upgrade-792276
    user: stopped-upgrade-792276
  name: stopped-upgrade-792276
current-context: ""
kind: Config
users:
- name: cert-expiration-016294
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.key
- name: kubernetes-upgrade-642005
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.key
- name: stopped-upgrade-792276
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.key
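Note: the kubeconfig dump above explains the failures earlier in this debugLogs block. It only defines contexts for cert-expiration-016294, kubernetes-upgrade-642005 and stopped-upgrade-792276, and current-context is empty, so every kubectl probe that passes --context false-373254 fails with "context was not found", and every minikube probe reports the missing profile. A minimal hand-run check (illustrative only; assumes the same KUBECONFIG and the repo-local minikube binary used throughout this report):

  # list the context names kubectl can resolve; false-373254 is not among them,
  # which is why `kubectl --context false-373254 ...` fails as shown above
  kubectl config get-contexts -o name

  # the matching view from the minikube side; false-373254 is absent here too,
  # hence the "Profile ... not found" messages
  out/minikube-linux-amd64 profile list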

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-373254

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-373254"

                                                
                                                
----------------------- debugLogs end: false-373254 [took: 3.426075475s] --------------------------------
helpers_test.go:175: Cleaning up "false-373254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-373254
--- PASS: TestNetworkPlugins/group/false (3.79s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1202 16:04:23.349169  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.476079642s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m12.21673964s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-373254 "pgrep -a kubelet"
I1202 16:05:02.964233  406799 config.go:182] Loaded profile config "auto-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kdf27" [a759663c-f87c-403e-a8cb-57e819d7751e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kdf27" [a759663c-f87c-403e-a8cb-57e819d7751e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004552672s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
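For reference, the NetCatPod steps in these groups poll the default namespace until a pod carrying the app=netcat label reports Running and Ready. A rough hand-run equivalent (illustrative only; assumes the deployment from testdata/netcat-deployment.yaml has already been applied to the auto-373254 context, as in the step above):

  # show the pods the test is waiting on
  kubectl --context auto-373254 get pods -n default -l app=netcat
  # block until they are Ready, mirroring the test's 15m0s wait
  kubectl --context auto-373254 wait --for=condition=Ready pod -l app=netcat -n default --timeout=15m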

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (53.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.775285626s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tqxdw" [f3f5cddc-aaf6-445e-845e-dec165d602a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004140884s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-373254 "pgrep -a kubelet"
I1202 16:05:54.305041  406799 config.go:182] Loaded profile config "kindnet-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d4rtj" [76533e93-eecc-4e2e-98ec-c1fe01ddd660] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d4rtj" [76533e93-eecc-4e2e-98ec-c1fe01ddd660] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004970725s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.053195711s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.05s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-vdzkp" [b9438c53-d7ba-40fc-9daa-126d48803df3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003711154s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-373254 "pgrep -a kubelet"
E1202 16:06:31.287445  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/addons-371602/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1202 16:06:31.385889  406799 config.go:182] Loaded profile config "calico-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sb5tg" [7feb279c-c394-4677-8ef5-f1be9d2a908a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sb5tg" [7feb279c-c394-4677-8ef5-f1be9d2a908a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004333689s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (59.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (59.540559857s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-792276
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-792276: (1.609049268s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.61s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1202 16:07:15.603637  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-031973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.941601041s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.94s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-373254 "pgrep -a kubelet"
I1202 16:07:16.289004  406799 config.go:182] Loaded profile config "custom-flannel-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q9nft" [c210ae7a-681d-4369-82f0-dab9816af7ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q9nft" [c210ae7a-681d-4369-82f0-dab9816af7ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004190805s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (66.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-373254 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m6.006630402s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (53.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-157184 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-157184 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.39940804s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-cd5px" [7ba9aab4-e4fd-409b-a020-5689d91ab836] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00413896s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-373254 "pgrep -a kubelet"
I1202 16:08:02.774103  406799 config.go:182] Loaded profile config "enable-default-cni-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d99s7" [aab1510b-ee26-4443-bceb-b43f1e3023ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d99s7" [aab1510b-ee26-4443-bceb-b43f1e3023ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004867759s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-373254 "pgrep -a kubelet"
I1202 16:08:08.592860  406799 config.go:182] Loaded profile config "flannel-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-drgp8" [1b7c8593-8772-458b-9eae-b4dd33584f27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-drgp8" [1b7c8593-8772-458b-9eae-b4dd33584f27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004842268s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-756109 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-756109 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (51.029566819s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (42.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-531540 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-531540 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (42.721393479s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157184 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [661b68d2-d2ac-492e-a8d8-40ca8fb3aea2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [661b68d2-d2ac-492e-a8d8-40ca8fb3aea2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003864604s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-157184 exec busybox -- /bin/sh -c "ulimit -n"
I1202 16:08:54.983474  406799 config.go:182] Loaded profile config "bridge-373254": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-373254 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-373254 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ndjj9" [35fbbb81-c57c-466f-840d-859b28644763] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ndjj9" [35fbbb81-c57c-466f-840d-859b28644763] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004827007s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-157184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-157184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-157184 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-157184 --alsologtostderr -v=3: (12.503703956s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-373254 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-373254 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157184 -n old-k8s-version-157184
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157184 -n old-k8s-version-157184: exit status 7 (119.183371ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-157184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
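The non-zero exit noted above is expected at this point: the profile was stopped in the previous step, so `minikube status` prints "Stopped" and exits with status 7, which the test tolerates ("may be ok"). A hand-run equivalent (illustrative only, reusing the profile from this run):

  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157184 -n old-k8s-version-157184
  echo $?   # 7 while the host is stopped, matching the exit status recorded above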

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-157184 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-157184 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.297762257s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-157184 -n old-k8s-version-157184
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.69s)
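
Note: SecondStart re-runs the original start invocation against the existing, stopped profile; --wait=true makes minikube block until the cluster components report healthy before the host state is checked again. A trimmed sketch of the stop/restart cycle, with a hypothetical scratch profile (demo-old-k8s) standing in for the real one:

  out/minikube-linux-amd64 stop -p demo-old-k8s --alsologtostderr -v=3
  out/minikube-linux-amd64 start -p demo-old-k8s --memory=3072 --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0
  out/minikube-linux-amd64 status --format={{.Host}} -p demo-old-k8s   # expected to print "Running" after the restart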

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-531540 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1070d009-c1ae-43fb-9134-a181a5190509] Pending
E1202 16:09:23.348610  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/functional-748804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [1070d009-c1ae-43fb-9134-a181a5190509] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1070d009-c1ae-43fb-9134-a181a5190509] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004730345s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-531540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)
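
Note: each DeployApp subtest creates a busybox pod from testdata/busybox.yaml, waits up to 8 minutes for pods labelled integration-test=busybox to become healthy, then reads the open-file limit inside the container. The harness polls the pod programmatically; kubectl wait below is only an approximation of that wait, not what the test itself runs:

  kubectl --context embed-certs-531540 create -f testdata/busybox.yaml
  kubectl --context embed-certs-531540 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context embed-certs-531540 exec busybox -- /bin/sh -c "ulimit -n"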

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-756109 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [662f8174-b6e3-4ae4-926c-97f5f4dc195c] Pending
helpers_test.go:352: "busybox" [662f8174-b6e3-4ae4-926c-97f5f4dc195c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [662f8174-b6e3-4ae4-926c-97f5f4dc195c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004205975s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-756109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-588189 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-588189 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (39.064190033s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-531540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-531540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.335111228s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-531540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)
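
Note: the EnableAddonWhileActive subtests enable metrics-server with its image and registry overridden on the command line (both --images and --registries take Component=value pairs, as shown), pointing the MetricsServer component at registry.k8s.io/echoserver:1.4 on a deliberately unreachable fake.domain registry. The follow-up describe presumably verifies the rewritten image reference rather than a working metrics-server. To inspect the result on this profile:

  kubectl --context embed-certs-531540 describe deploy/metrics-server -n kube-system | grep -i image   # image reference should point at the fake.domain registry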

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-531540 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-531540 --alsologtostderr -v=3: (12.140651518s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-756109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-756109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0387622s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-756109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-756109 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-756109 --alsologtostderr -v=3: (12.232914462s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531540 -n embed-certs-531540
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531540 -n embed-certs-531540: exit status 7 (105.669269ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-531540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (49.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-531540 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-531540 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (49.57367125s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531540 -n embed-certs-531540
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-756109 -n no-preload-756109
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-756109 -n no-preload-756109: exit status 7 (94.463409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-756109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (45.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-756109 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-756109 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (44.881528583s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-756109 -n no-preload-756109
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pj6pc" [2d25f67a-3932-49c6-84c7-6aae06072be3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004158782s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pj6pc" [2d25f67a-3932-49c6-84c7-6aae06072be3] Running
E1202 16:10:03.143191  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.149575  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.161033  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.182505  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.224107  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.305774  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.467942  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:03.790253  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:04.431953  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:05.713645  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004017072s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-157184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
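
Note: UserAppExistsAfterStop and AddonExistsAfterStop both wait (up to 9 minutes) for the kubernetes-dashboard pods to come back after the restart, selected by the k8s-app=kubernetes-dashboard label; the interleaved cert_rotation errors appear to come from the client certificate of a profile torn down earlier in the run (auto-373254) and do not affect this result. A quick manual spot-check of the same condition:

  kubectl --context old-k8s-version-157184 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-157184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard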

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588189 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [26db6129-75d0-4dba-b409-214049eb24df] Pending
helpers_test.go:352: "busybox" [26db6129-75d0-4dba-b409-214049eb24df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [26db6129-75d0-4dba-b409-214049eb24df] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00471249s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-588189 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-157184 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
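
Note: VerifyKubernetesImages lists the images present in the node's container runtime as JSON and reports anything outside the expected minikube/Kubernetes set; the busybox test image and the kindnet CNI images flagged above are expected for this configuration. The listing can be reproduced directly:

  out/minikube-linux-amd64 -p old-k8s-version-157184 image list --format=json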

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-157184 --alsologtostderr -v=1
E1202 16:10:08.275649  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157184 -n old-k8s-version-157184
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157184 -n old-k8s-version-157184: exit status 2 (344.908685ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-157184 -n old-k8s-version-157184
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-157184 -n old-k8s-version-157184: exit status 2 (339.243483ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-157184 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157184 -n old-k8s-version-157184
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-157184 -n old-k8s-version-157184
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)
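
Note: the Pause subtests pause the whole profile, confirm via templated status that the API server reports Paused and the kubelet reports Stopped (each call exits with status 2 here, which the test accepts), then unpause and re-check. The same cycle by hand:

  out/minikube-linux-amd64 pause -p old-k8s-version-157184 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-157184 -n old-k8s-version-157184   # "Paused", exit status 2 in this run
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-157184 -n old-k8s-version-157184     # "Stopped", exit status 2 in this run
  out/minikube-linux-amd64 unpause -p old-k8s-version-157184 --alsologtostderr -v=1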

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (34.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-662756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-662756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (34.075127902s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-588189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-588189 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-588189 --alsologtostderr -v=3
E1202 16:10:23.638718  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-588189 --alsologtostderr -v=3: (12.714283283s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189: exit status 7 (109.266924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-588189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-588189 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-588189 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (47.226566588s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-w5wc7" [7ba5fe4e-fe0b-491b-805d-ba27053ec9f9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005309015s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kzlfr" [f416732f-1e05-4166-9f7a-17cae92bb47a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003708079s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-w5wc7" [7ba5fe4e-fe0b-491b-805d-ba27053ec9f9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003526424s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-756109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kzlfr" [f416732f-1e05-4166-9f7a-17cae92bb47a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00416272s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-531540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-756109 image list --format=json
E1202 16:10:44.120030  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-756109 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-756109 -n no-preload-756109
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-756109 -n no-preload-756109: exit status 2 (366.702372ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-756109 -n no-preload-756109
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-756109 -n no-preload-756109: exit status 2 (372.129402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-756109 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-756109 -n no-preload-756109
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-756109 -n no-preload-756109
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-531540 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-531540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-531540 --alsologtostderr -v=1: (1.235893537s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531540 -n embed-certs-531540
E1202 16:10:48.314855  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:48.636892  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531540 -n embed-certs-531540: exit status 2 (444.705634ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531540 -n embed-certs-531540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531540 -n embed-certs-531540: exit status 2 (434.381194ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-531540 --alsologtostderr -v=1
E1202 16:10:49.279349  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531540 -n embed-certs-531540
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531540 -n embed-certs-531540
E1202 16:10:50.560764  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-662756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1202 16:10:47.988805  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:47.996191  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:48.007731  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:48.029583  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:48.071341  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:10:48.152905  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-662756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.624459345s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-662756 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-662756 --alsologtostderr -v=3: (2.195510782s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662756 -n newest-cni-662756
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662756 -n newest-cni-662756: exit status 7 (91.051749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-662756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-662756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 16:10:53.122158  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-662756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: (10.217949954s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-662756 -n newest-cni-662756
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-662756 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-662756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662756 -n newest-cni-662756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662756 -n newest-cni-662756: exit status 2 (328.804464ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662756 -n newest-cni-662756
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662756 -n newest-cni-662756: exit status 2 (337.739718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-662756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-662756 -n newest-cni-662756
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-662756 -n newest-cni-662756
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qw7mk" [60a9035c-3593-4740-bb95-b36d0ff1e88c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004457602s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qw7mk" [60a9035c-3593-4740-bb95-b36d0ff1e88c] Running
E1202 16:11:25.062583  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.069107  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.080652  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.081861  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/auto-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.102419  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.143919  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.225659  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.387285  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:25.709464  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:26.351787  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:27.633947  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004820917s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-588189 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-588189 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-588189 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
E1202 16:11:28.968316  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kindnet-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189: exit status 2 (340.20157ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189: exit status 2 (338.760026ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-588189 --alsologtostderr -v=1
E1202 16:11:30.195457  406799 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/calico-373254/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-588189 -n default-k8s-diff-port-588189
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

                                                
                                    

Test skip (32/419)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.15
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
384 TestNetworkPlugins/group/kubenet 3.6
392 TestNetworkPlugins/group/cilium 3.99
398 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
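A hedged sketch of the skip guard implied by "Preload exists, images won't be cached": if the preload tarball is already on disk, there is nothing to cache. The MINIKUBE_HOME-relative path and the test name are illustrative assumptions, not the code in aaa_download_only_test.go:

package download

import (
	"os"
	"path/filepath"
	"testing"
)

func TestCachedImages(t *testing.T) {
	// Hypothetical location of the preload tarball for this Kubernetes version.
	preload := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(preload); err == nil {
		t.Skip("Preload exists, images won't be cached")
	}
	// ... otherwise, assert every expected image is present in the local cache ...
}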

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
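The kubectl subtests above skip everywhere except darwin and windows. A sketch of that platform guard using runtime.GOOS; the test body is elided:

package download

import (
	"runtime"
	"testing"
)

func TestKubectlDownload(t *testing.T) {
	// Only darwin and windows have a kubectl download step to verify here.
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows")
	}
	// ... on darwin/windows, verify a kubectl binary was downloaded into the cache ...
}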

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 15:09:47.593706  406799 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1202 15:09:47.721279  406799 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
W1202 15:09:47.747489  406799 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.15s)
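The two warnings above show both preload mirrors returning 404 for v1.35.0-beta.0, which is why this subtest ends with "No preload image". A standalone approximation of that probe; the URL is copied from the log, and the real test may use a different request method:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-amd64.tar.lz4"
	// Anything other than 200 is treated as "no preload available".
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s -> %d (preload available: %v)\n", url, resp.StatusCode, resp.StatusCode == http.StatusOK)
}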

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
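This skip, and the docker-env/podman-env skips further down, all gate on the container runtime under test. A sketch of the pattern, where ContainerRuntime() is a hypothetical stand-in for however the harness reads --container-runtime:

package docker

import (
	"os"
	"testing"
)

// ContainerRuntime is a stand-in for the real flag plumbing; here it just
// consults an environment variable and defaults to containerd.
func ContainerRuntime() string {
	if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
		return rt
	}
	return "containerd"
}

func TestDockerFlags(t *testing.T) {
	if ContainerRuntime() != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", ContainerRuntime())
	}
	// ... start a cluster with custom docker flags and verify them via docker info ...
}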

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
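"--gvisor=false" refers to a test-binary flag. A sketch of a flag-gated skip; registering the flag locally here is an assumption about how the real main_test.go wires it up:

package gvisor

import (
	"flag"
	"testing"
)

// gvisor must be passed explicitly (go test -args -gvisor=true) for the addon test to run.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon test")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skipf("skipping test because --gvisor=%t", *gvisor)
	}
	// ... enable the gvisor addon and run a pod under the gvisor runtime class ...
}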

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
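A sketch of the two preconditions named above (none driver plus a non-empty SUDO_USER); reading the driver from an environment variable is an assumption made only for this example:

package none

import (
	"os"
	"testing"
)

func TestChangeNoneUser(t *testing.T) {
	// The real test also checks the configured driver; MINIKUBE_DRIVER is a stand-in.
	if os.Getenv("MINIKUBE_DRIVER") != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
	// ... check that files minikube creates are chowned back to $SUDO_USER ...
}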

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-373254 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-016294
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:03:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642005
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-792276
contexts:
- context:
    cluster: cert-expiration-016294
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-016294
  name: cert-expiration-016294
- context:
    cluster: kubernetes-upgrade-642005
    user: kubernetes-upgrade-642005
  name: kubernetes-upgrade-642005
- context:
    cluster: stopped-upgrade-792276
    user: stopped-upgrade-792276
  name: stopped-upgrade-792276
current-context: ""
kind: Config
users:
- name: cert-expiration-016294
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.key
- name: kubernetes-upgrade-642005
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.key
- name: stopped-upgrade-792276
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.key
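Every "context was not found for specified context: kubenet-373254" message above is consistent with this kubeconfig: the profile never reached the file and current-context is empty. A small sketch, assuming k8s.io/client-go is available, that lists which contexts a kubeconfig actually holds:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile resolves to ~/.kube/config; pass an explicit path
	// instead if the kubeconfig lives elsewhere.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
}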

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-373254

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-373254"

                                                
                                                
----------------------- debugLogs end: kubenet-373254 [took: 3.423058822s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-373254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-373254
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-373254 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-373254" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-016294
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:03:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-642005
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-403182/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:02:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: stopped-upgrade-792276
contexts:
- context:
    cluster: cert-expiration-016294
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 16:01:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-016294
  name: cert-expiration-016294
- context:
    cluster: kubernetes-upgrade-642005
    user: kubernetes-upgrade-642005
  name: kubernetes-upgrade-642005
- context:
    cluster: stopped-upgrade-792276
    user: stopped-upgrade-792276
  name: stopped-upgrade-792276
current-context: ""
kind: Config
users:
- name: cert-expiration-016294
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/cert-expiration-016294/client.key
- name: kubernetes-upgrade-642005
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/kubernetes-upgrade-642005/client.key
- name: stopped-upgrade-792276
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.crt
    client-key: /home/jenkins/minikube-integration/22021-403182/.minikube/profiles/stopped-upgrade-792276/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-373254

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-373254" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-373254"

                                                
                                                
----------------------- debugLogs end: cilium-373254 [took: 3.814309362s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-373254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-373254
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-172386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-172386
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
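Note: the skip is produced by a driver guard at start_stop_delete_test.go:101: the subtest is only meaningful on the virtualbox driver, and this job runs the docker driver with containerd. A hedged sketch of that kind of gate (illustrative only, not the actual minikube test source):

package example

import (
	"strings"
	"testing"
)

// startArgs stands in for the job's configured start flags; this run uses the
// docker driver, so the guard below skips the subtest.
var startArgs = "--driver=docker --container-runtime=containerd"

func TestDisableDriverMounts(t *testing.T) {
	// --disable-driver-mounts only applies to the virtualbox driver.
	if !strings.Contains(startArgs, "virtualbox") {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// ... the real test would start a cluster with --disable-driver-mounts here ...
}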

                                                
                                    