Test Report: Docker_Linux_containerd 21643

cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725

Failed tests (9/331)

TestFunctional/parallel/DashboardCmd (302.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-199910 --alsologtostderr -v=1] stderr:
I1002 06:19:15.450207  429962 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:15.450448  429962 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.450458  429962 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:15.450462  429962 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:15.450654  429962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:15.450950  429962 mustload.go:65] Loading cluster: functional-199910
I1002 06:19:15.451298  429962 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:15.451656  429962 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:15.469123  429962 host.go:66] Checking if "functional-199910" exists ...
I1002 06:19:15.469371  429962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 06:19:15.523116  429962 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.513261468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 06:19:15.523259  429962 api_server.go:166] Checking apiserver status ...
I1002 06:19:15.523322  429962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 06:19:15.523373  429962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:15.541031  429962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:15.645409  429962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup
W1002 06:19:15.653066  429962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5165/cgroup: Process exited with status 1
stdout:

stderr:
I1002 06:19:15.653129  429962 ssh_runner.go:195] Run: ls
I1002 06:19:15.656499  429962 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 06:19:15.661410  429962 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 06:19:15.661450  429962 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 06:19:15.661597  429962 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:15.661607  429962 addons.go:69] Setting dashboard=true in profile "functional-199910"
I1002 06:19:15.661614  429962 addons.go:238] Setting addon dashboard=true in "functional-199910"
I1002 06:19:15.661636  429962 host.go:66] Checking if "functional-199910" exists ...
I1002 06:19:15.662037  429962 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:15.680443  429962 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 06:19:15.681553  429962 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 06:19:15.682403  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 06:19:15.682418  429962 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 06:19:15.682466  429962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:15.698455  429962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:15.802277  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 06:19:15.802300  429962 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 06:19:15.815119  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 06:19:15.815138  429962 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 06:19:15.826618  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 06:19:15.826636  429962 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 06:19:15.838594  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 06:19:15.838613  429962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 06:19:15.850339  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 06:19:15.850356  429962 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 06:19:15.862313  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 06:19:15.862332  429962 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 06:19:15.873961  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 06:19:15.873981  429962 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 06:19:15.886281  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 06:19:15.886298  429962 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 06:19:15.898021  429962 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 06:19:15.898038  429962 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 06:19:15.910079  429962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 06:19:16.305455  429962 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-199910 addons enable metrics-server

I1002 06:19:16.306557  429962 addons.go:201] Writing out "functional-199910" config to set dashboard=true...
W1002 06:19:16.306991  429962 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 06:19:16.307913  429962 kapi.go:59] client config for functional-199910: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.key", CAFile:"/home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 06:19:16.308426  429962 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 06:19:16.308445  429962 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 06:19:16.308449  429962 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 06:19:16.308454  429962 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 06:19:16.308465  429962 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 06:19:16.315681  429962 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  80a050fd-cfcb-416a-9f55-860f40ed678f 1247 0 2025-10-02 06:19:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 06:19:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.101.174.148,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.101.174.148],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 06:19:16.315820  429962 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 06:19:16.315876  429962 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-199910 proxy --port 36195]
I1002 06:19:16.316138  429962 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 06:19:16.360000  429962 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 06:19:16.360074  429962 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 06:19:16.367982  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6262cc89-5d16-4e24-9e3b-55735c7b711a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e40c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I1002 06:19:16.368059  429962 retry.go:31] will retry after 88.676µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.371392  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a25b702-86a7-4e0a-b9ed-1020b3f97e31] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a72c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1002 06:19:16.371453  429962 retry.go:31] will retry after 80.21µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.374577  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f892c5b-3a88-49ff-ac34-c6a38c69f441] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I1002 06:19:16.374620  429962 retry.go:31] will retry after 292.049µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.377540  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e84e803e-87a9-4446-9deb-b219453d9a1d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a73c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1002 06:19:16.377581  429962 retry.go:31] will retry after 443.208µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.380646  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0839b483-f57f-4acf-a160-8de6408ab2f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf2c0 TLS:<nil>}
I1002 06:19:16.380709  429962 retry.go:31] will retry after 264.305µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.383698  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b01bf94-77e2-41d1-b969-a7d28cd284dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1002 06:19:16.383745  429962 retry.go:31] will retry after 609.486µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.386595  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db88100e-afd4-4dab-8b6d-43adab3ab9c2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748000 TLS:<nil>}
I1002 06:19:16.386629  429962 retry.go:31] will retry after 933.507µs: Temporary Error: unexpected response code: 503
I1002 06:19:16.389443  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[acfe5203-362c-45e1-b82a-d5d08357e7ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a74c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748140 TLS:<nil>}
I1002 06:19:16.389489  429962 retry.go:31] will retry after 1.743964ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.393315  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbf99f44-a866-4166-9d48-fd7f2b2b369c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf400 TLS:<nil>}
I1002 06:19:16.393348  429962 retry.go:31] will retry after 2.937946ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.398171  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82d7842b-532a-444e-86e1-a7c1308dbedd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748280 TLS:<nil>}
I1002 06:19:16.398209  429962 retry.go:31] will retry after 3.876728ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.404132  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0bfdb4db-d82c-488c-95ea-8ab236f4054c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1002 06:19:16.404165  429962 retry.go:31] will retry after 7.813323ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.414068  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd84c578-f5d5-4ef2-80bd-938f0f2a4214] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a75c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017483c0 TLS:<nil>}
I1002 06:19:16.414100  429962 retry.go:31] will retry after 8.742475ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.424960  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[549b24b5-e0a9-41db-b5bf-5c9c4d269a20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf540 TLS:<nil>}
I1002 06:19:16.424998  429962 retry.go:31] will retry after 13.582393ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.441317  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[59385281-0c0e-43af-9ed8-975e923f5663] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1002 06:19:16.441381  429962 retry.go:31] will retry after 13.5332ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.457362  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a30b0c1d-160e-4d18-b94d-843537cd6f6f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748500 TLS:<nil>}
I1002 06:19:16.457407  429962 retry.go:31] will retry after 24.619545ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.484775  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53233ac6-ca8f-44dd-b4bd-33e33c7e164d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748640 TLS:<nil>}
I1002 06:19:16.484827  429962 retry.go:31] will retry after 46.588127ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.534009  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7db8b8d5-faed-4790-a421-59dfc7c48f09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e4880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748780 TLS:<nil>}
I1002 06:19:16.534048  429962 retry.go:31] will retry after 65.831904ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.603204  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a55b26c7-59e9-4309-b130-c955ad38f2f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a7700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1002 06:19:16.603268  429962 retry.go:31] will retry after 85.970958ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.693189  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fe479821-1511-492f-bb5b-ac594ba24979] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0005e49c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf680 TLS:<nil>}
I1002 06:19:16.693248  429962 retry.go:31] will retry after 139.090478ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.834790  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a341ed7-ef09-459f-8819-4a0197ce2e91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc00085c800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1002 06:19:16.834849  429962 retry.go:31] will retry after 139.796734ms: Temporary Error: unexpected response code: 503
I1002 06:19:16.977548  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[37947537-9868-4cac-b9b0-1c29ee115bf0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:16 GMT]] Body:0xc0009a7780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017488c0 TLS:<nil>}
I1002 06:19:16.977626  429962 retry.go:31] will retry after 363.743668ms: Temporary Error: unexpected response code: 503
I1002 06:19:17.344639  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f40fd6f2-4f2c-4896-9fa5-f61d1aeb419d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:17 GMT]] Body:0xc00085dc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf7c0 TLS:<nil>}
I1002 06:19:17.344696  429962 retry.go:31] will retry after 741.788917ms: Temporary Error: unexpected response code: 503
I1002 06:19:18.090966  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7392d6df-9f81-4eac-92d1-874e2f52fdf9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:18 GMT]] Body:0xc0009a7880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748a00 TLS:<nil>}
I1002 06:19:18.091037  429962 retry.go:31] will retry after 997.833398ms: Temporary Error: unexpected response code: 503
I1002 06:19:19.091605  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54fb561a-b973-4325-94c7-99e80f437e50] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:19 GMT]] Body:0xc0007d80c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bf900 TLS:<nil>}
I1002 06:19:19.091665  429962 retry.go:31] will retry after 932.61279ms: Temporary Error: unexpected response code: 503
I1002 06:19:20.027117  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dca262d3-c5ca-47f1-bc28-8ded2c78b8ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:20 GMT]] Body:0xc0009a7940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748b40 TLS:<nil>}
I1002 06:19:20.027191  429962 retry.go:31] will retry after 1.794435325s: Temporary Error: unexpected response code: 503
I1002 06:19:21.825634  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[662335d0-93ab-44fb-9f95-d946e69c1eef] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:21 GMT]] Body:0xc0005e4b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I1002 06:19:21.825730  429962 retry.go:31] will retry after 1.776278189s: Temporary Error: unexpected response code: 503
I1002 06:19:23.605617  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f2c932b-b3dd-4090-87e0-f6ece8759852] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:23 GMT]] Body:0xc0005e4c40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1002 06:19:23.605686  429962 retry.go:31] will retry after 4.916942492s: Temporary Error: unexpected response code: 503
I1002 06:19:28.526451  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aabc63ac-1649-4e85-802f-e1d3a60e1c21] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:28 GMT]] Body:0xc0005e4d00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c0000 TLS:<nil>}
I1002 06:19:28.526512  429962 retry.go:31] will retry after 5.906031757s: Temporary Error: unexpected response code: 503
I1002 06:19:34.438460  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97984b78-4542-40fe-a65e-d0d54d92c530] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:34 GMT]] Body:0xc0007d8240 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1002 06:19:34.438525  429962 retry.go:31] will retry after 10.752311502s: Temporary Error: unexpected response code: 503
I1002 06:19:45.197681  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c90e805b-2852-4289-9764-f1df91e73efe] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:45 GMT]] Body:0xc0009a7a80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I1002 06:19:45.197753  429962 retry.go:31] will retry after 12.873006097s: Temporary Error: unexpected response code: 503
I1002 06:19:58.073334  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9da17a89-b1af-42a6-baa4-a350a23f5a4f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:19:58 GMT]] Body:0xc0009a7b00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1002 06:19:58.073405  429962 retry.go:31] will retry after 14.544782249s: Temporary Error: unexpected response code: 503
I1002 06:20:12.621959  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49994a32-a939-4fce-9bf0-b138d4359d5b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:20:12 GMT]] Body:0xc0009a7b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001748c80 TLS:<nil>}
I1002 06:20:12.622020  429962 retry.go:31] will retry after 24.734520816s: Temporary Error: unexpected response code: 503
I1002 06:20:37.359795  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7827eba0-ad1d-4c34-813a-908d40b6d189] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:20:37 GMT]] Body:0xc0007d8400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017c0140 TLS:<nil>}
I1002 06:20:37.359877  429962 retry.go:31] will retry after 58.784061825s: Temporary Error: unexpected response code: 503
I1002 06:21:36.148057  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88d29ef7-2993-480f-853b-8ac11661da7b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:21:36 GMT]] Body:0xc000892d40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1002 06:21:36.148134  429962 retry.go:31] will retry after 1m29.114766521s: Temporary Error: unexpected response code: 503
I1002 06:23:05.270261  429962 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[74559056-0757-4ffe-b5b8-2acd525387a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 06:23:05 GMT]] Body:0xc0005e4200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000316000 TLS:<nil>}
I1002 06:23:05.270360  429962 retry.go:31] will retry after 1m15.519794491s: Temporary Error: unexpected response code: 503
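
Analysis: the stderr above shows the dashboard manifests applying cleanly and kubectl proxy coming up on 127.0.0.1:36195, but every health probe of the dashboard service URL returned 503 (the dashboard pod never became ready). The retry helper backed off from microseconds up to 1m29s until the test's 302s budget ran out, and because the dashboard URL is only printed once that probe succeeds, functional_test.go:933 reports "output didn't produce a URL". Below is a minimal Go sketch of that poll-with-backoff pattern; pollProxy and the exact backoff policy are illustrative, not minikube's actual retry API.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollProxy polls url until it answers 200 OK or the budget is spent,
// roughly doubling the wait between attempts (the "will retry after ..."
// delays in the log are jittered rather than exact doublings).
func pollProxy(url string, budget time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy: only now would the dashboard URL be printed
			}
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("dashboard at %s never returned 200 within %s", url, budget)
}

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	fmt.Println(pollProxy(url, 5*time.Minute))
}
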
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-199910
helpers_test.go:243: (dbg) docker inspect functional-199910:

-- stdout --
	[
	    {
	        "Id": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	        "Created": "2025-10-02T06:11:55.541637226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:11:55.570474432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hostname",
	        "HostsPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hosts",
	        "LogPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded-json.log",
	        "Name": "/functional-199910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-199910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-199910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	                "LowerDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718-init/diff:/var/lib/docker/overlay2/298df2ba9683a73d350c1b6c983da9f2b87e35cf844050b5b24d44ff0e84e14d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-199910",
	                "Source": "/var/lib/docker/volumes/functional-199910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-199910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-199910",
	                "name.minikube.sigs.k8s.io": "functional-199910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f968325eef651f67db13113c73a3310ee76a7c88af5a211cc222343e85ee43d1",
	            "SandboxKey": "/var/run/docker/netns/f968325eef65",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-199910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:65:7a:15:4b:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d66feb0971ca31aa50fbd8d10400dca354f44739c3efb8d06e897cb43ffc6b4",
	                    "EndpointID": "50a6d2ecb2dfd2667a6e29fd9b2eea174bfafa2a4794ff1c52dc85ca797a6a00",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-199910",
	                        "a129060d7e93"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
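
The inspect output confirms the wiring the test depended on: 22/tcp is published on 127.0.0.1:33159 (the ssh client opened at sshutil.go:53 above) and the apiserver's 8441/tcp on 127.0.0.1:33162. A minimal Go sketch of how the cli_runner.go:164 invocations resolve that SSH port, using the same inspect template seen in the log (sshHostPort is a hypothetical helper name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs `docker container inspect` with the Go template from the
// cli_runner.go:164 log lines and returns the host port mapped to 22/tcp.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-199910")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33159 in this run
}
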
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-199910 -n functional-199910
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs -n 25: (1.15124341s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount          │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh            │ functional-199910 ssh findmnt -T /mount1                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh findmnt -T /mount2                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh findmnt -T /mount3                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ mount          │ -p functional-199910 --kill=true                                                                                   │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-199910 --alsologtostderr -v=1                                                     │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format short --alsologtostderr                                                        │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format yaml --alsologtostderr                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh pgrep buildkitd                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ image          │ functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr             │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls                                                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format json --alsologtostderr                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format table --alsologtostderr                                                        │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ service        │ functional-199910 service list                                                                                     │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ 02 Oct 25 06:23 UTC │
	│ service        │ functional-199910 service list -o json                                                                             │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │ 02 Oct 25 06:23 UTC │
	│ service        │ functional-199910 service --namespace=default --https --url hello-node                                             │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │                     │
	│ service        │ functional-199910 service hello-node --url --format={{.IP}}                                                        │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │                     │
	│ service        │ functional-199910 service hello-node --url                                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:23 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:19:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:19:15.254520  429832 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:19:15.254770  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.254780  429832 out.go:374] Setting ErrFile to fd 2...
	I1002 06:19:15.254792  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.255023  429832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:19:15.255645  429832 out.go:368] Setting JSON to false
	I1002 06:19:15.256783  429832 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:19:15.256873  429832 start.go:140] virtualization: kvm guest
	I1002 06:19:15.258505  429832 out.go:179] * [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:19:15.259591  429832 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:19:15.259597  429832 notify.go:220] Checking for updates...
	I1002 06:19:15.260831  429832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:19:15.262162  429832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:19:15.263266  429832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:19:15.264267  429832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:19:15.265202  429832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:19:15.266577  429832 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:19:15.267099  429832 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:19:15.289007  429832 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:19:15.289098  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.340141  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.330436749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.340263  429832 docker.go:318] overlay module found
	I1002 06:19:15.341738  429832 out.go:179] * Using the docker driver based on existing profile
	I1002 06:19:15.342748  429832 start.go:304] selected driver: docker
	I1002 06:19:15.342764  429832 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.342901  429832 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:19:15.343027  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.398873  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.389220623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.399597  429832 cni.go:84] Creating CNI manager for ""
	I1002 06:19:15.399659  429832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:19:15.399708  429832 start.go:348] cluster config:
	{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.401261  429832 out.go:179] * dry-run validation complete!
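	
	The dry-run start captured above never creates or mutates the cluster container: it loads the existing functional-199910 profile, re-validates the docker driver and the saved cluster config against it, and exits at "dry-run validation complete!". The invocation, as recorded in the command table earlier in this report, was:
	
	  out/minikube-linux-amd64 start -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd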
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	30b85d3947fb7       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   dfc668eebccd9       busybox-mount                               default
	f6fa8a43c86fe       5107333e08a87       10 minutes ago      Running             mysql                     0                   070ec94bcd8f3       mysql-5bb876957f-cvvj2                      default
	c2503d55a98b1       c3994bc696102       11 minutes ago      Running             kube-apiserver            0                   c790b8963b464       kube-apiserver-functional-199910            kube-system
	0826543035037       c80c8dbafe7dd       11 minutes ago      Running             kube-controller-manager   2                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	a46012c8d77f4       c80c8dbafe7dd       11 minutes ago      Exited              kube-controller-manager   1                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	cf1bb9911e32d       7dd6aaa1717ab       11 minutes ago      Running             kube-scheduler            1                   054fb86bca056       kube-scheduler-functional-199910            kube-system
	73ee8e97a7860       5f1f5298c888d       11 minutes ago      Running             etcd                      1                   1a30670caae60       etcd-functional-199910                      kube-system
	fdcf1ac8db1d9       6e38f40d628db       11 minutes ago      Running             storage-provisioner       1                   a5747f1c0a73f       storage-provisioner                         kube-system
	f0bafa2f4b2c3       52546a367cc9e       11 minutes ago      Running             coredns                   1                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	b42bb88d18439       fc25172553d79       11 minutes ago      Running             kube-proxy                1                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	ddd36a4ac2f9f       409467f978b4a       11 minutes ago      Running             kindnet-cni               1                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	ff9cd2a0d98dc       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	80952a8c29127       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       0                   a5747f1c0a73f       storage-provisioner                         kube-system
	0e6365f8d4553       409467f978b4a       12 minutes ago      Exited              kindnet-cni               0                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	97a61b088ac75       fc25172553d79       12 minutes ago      Exited              kube-proxy                0                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	08e26663c7e44       5f1f5298c888d       12 minutes ago      Exited              etcd                      0                   1a30670caae60       etcd-functional-199910                      kube-system
	53950e32492bd       7dd6aaa1717ab       12 minutes ago      Exited              kube-scheduler            0                   054fb86bca056       kube-scheduler-functional-199910            kube-system
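	
	Several containers above are in the Exited state: the one-shot mount-munger pod plus the pre-restart instances of coredns, storage-provisioner, kindnet-cni, kube-proxy, etcd, and kube-scheduler. A typical way to inspect them is crictl inside the minikube node; this is a sketch that assumes crictl is on the node's PATH (true for minikube's kicbase image) and reuses a container ID from the table:
	
	  # list all containers, including exited ones
	  minikube -p functional-199910 ssh -- sudo crictl ps -a
	  # fetch the logs of the exited mount-munger container
	  minikube -p functional-199910 ssh -- sudo crictl logs 30b85d3947fb7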
	
	
	==> containerd <==
	Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.884796936Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.886503009Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:05 functional-199910 containerd[3842]: time="2025-10-02T06:20:05.475070954Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106347197Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106419885Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.884544461Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.886118204Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:44 functional-199910 containerd[3842]: time="2025-10-02T06:20:44.462408812Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454784488Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454808932Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.884513245Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.886275527Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:57 functional-199910 containerd[3842]: time="2025-10-02T06:20:57.461979996Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089171758Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089223010Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.884785044Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.886572046Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:20 functional-199910 containerd[3842]: time="2025-10-02T06:22:20.500547860Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127605836Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127686885Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.884730907Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.886326946Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:29 functional-199910 containerd[3842]: time="2025-10-02T06:22:29.462579836Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105354289Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105447128Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
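	
	Two failures repeat throughout the containerd log above. Every pull attempt is preceded by "failed to decode hosts.toml" / "invalid `host` tree", which suggests a hosts.toml under containerd's certs.d directory is malformed and being ignored; and every pull of the dashboard and metrics-scraper images then fails with HTTP 429 from Docker Hub's unauthenticated pull rate limit, which plausibly explains why TestFunctional/parallel/DashboardCmd never produced a URL. A minimal, syntactically valid hosts.toml that also routes docker.io pulls through a mirror is sketched below; the mirror URL is a hypothetical placeholder, not something taken from this report:
	
	  # sketch: desired contents of /etc/containerd/certs.d/docker.io/hosts.toml on the node
	  # (mirror.example.com is a placeholder); restart containerd after writing it
	  server = "https://registry-1.docker.io"
	
	  [host."https://mirror.example.com"]
	    capabilities = ["pull", "resolve"]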
	
	
	==> coredns [f0bafa2f4b2c3011baa87254e1977f39f0be514d931e8c686e86c0aa29d3b6ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34140 - 64363 "HINFO IN 6421585372913567829.3122299182694518145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036751794s
	
	
	==> coredns [ff9cd2a0d98dc28d87af1e35cad013fc327b6424af6df9d7e63b16213372132f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58400 - 64231 "HINFO IN 6580085158149520847.7999093773305824839. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113539547s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-199910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-199910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-199910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_12_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-199910
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:24:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-199910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 26786d65ffe140308153a7ab60e7851e
	  System UUID:                d452a066-3b39-4b2a-bb48-a6d5f3f27351
	  Boot ID:                    928ae711-d7b1-4c1e-8d35-81d1dcf6c7b5
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-w8zxz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-6vrx2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-cvvj2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-lfbdz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-199910                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-nlvlv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-199910              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-199910     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6fsg9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-199910              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4mzf5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vlp7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-199910 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
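	
	The two kubernetes-dashboard pods listed above were created about five minutes before this snapshot and, given the pull failures in the containerd log, have most likely never become Ready. A quick confirmation from the same kubeconfig would be the commands below (the k8s-app label is the one the upstream dashboard manifests use):
	
	  kubectl --context functional-199910 -n kubernetes-dashboard get pods -o wide
	  kubectl --context functional-199910 -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard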
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 5d 35 94 2e 01 08 06
	[  +0.058144] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[  +7.548229] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[Oct 2 05:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
	[  +8.618588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e d9 8c 9f 19 f9 08 06
	[  +0.000520] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[  +0.839544] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[ +18.414075] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 fd ef 12 40 02 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[  +5.829441] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 7c 73 c3 88 96 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[ +15.373470] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 de db d2 97 bd 08 06
	[  +0.000392] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
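	
	The "martian source" lines are the kernel flagging packets whose source address is implausible for the interface they arrived on, which is common and harmless on the NATed bridge networks the docker driver creates. If the log noise is unwanted, martian logging can be switched off on the host; this is an optional tweak, not something the tests require:
	
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0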
	
	
	==> etcd [08e26663c7e447c1795a392880992c67b7efa0c467f1cc535f872ec73d63ad38] <==
	{"level":"warn","ts":"2025-10-02T06:12:05.810669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.816571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.822847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.835996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.842577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.894403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:12:50.300585Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T06:12:50.300664Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T06:12:50.300764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T06:12:57.302505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T06:12:57.302730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.302839Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T06:12:57.303142Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T06:12:57.303158Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-02T06:12:57.302240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303485Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.304975Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T06:12:57.305029Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.305061Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T06:12:57.305075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [73ee8e97a78602fac85e902429f6351307dfd75d4e432518b658ad78e1e9d2b1] <==
	{"level":"warn","ts":"2025-10-02T06:13:10.328017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.335433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.342042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.352795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50930","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:50930: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-02T06:13:10.360606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.367410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.373004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.379059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.384978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.391903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.398291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.404172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.410209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.415890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.421667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.427687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.433534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.448100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.451132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.456833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.462331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.511748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51280","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:23:10.057111Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-10-02T06:23:10.075219Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1114,"took":"17.775311ms","hash":1240741800,"current-db-size-bytes":3690496,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-02T06:23:10.075258Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1240741800,"revision":1114,"compact-revision":-1}
	
	
	==> kernel <==
	 06:24:16 up  2:06,  0 user,  load average: 0.08, 0.30, 0.62
	Linux functional-199910 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e6365f8d4553ed3a43b91cd235f10a301c95e3ea2ce0b800d81621f382c5540] <==
	I1002 06:12:15.314795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 06:12:15.315055       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 06:12:15.315240       1 main.go:148] setting mtu 1500 for CNI 
	I1002 06:12:15.315261       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 06:12:15.315278       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T06:12:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 06:12:15.436330       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 06:12:15.436387       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 06:12:15.436773       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 06:12:15.614228       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 06:12:15.914278       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 06:12:15.914307       1 metrics.go:72] Registering metrics
	I1002 06:12:15.914403       1 controller.go:711] "Syncing nftables rules"
	I1002 06:12:25.437165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:25.437244       1 main.go:301] handling current node
	I1002 06:12:35.444020       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:35.444049       1 main.go:301] handling current node
	I1002 06:12:45.439055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:45.439091       1 main.go:301] handling current node
	
	
	==> kindnet [ddd36a4ac2f9f0eb8d3a5fb2bf60cae5868d293c2b8e98bf9f4f9f13c884ba40] <==
	I1002 06:22:11.334785       1 main.go:301] handling current node
	I1002 06:22:21.334694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:21.334726       1 main.go:301] handling current node
	I1002 06:22:31.340901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:31.340954       1 main.go:301] handling current node
	I1002 06:22:41.334569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:41.334622       1 main.go:301] handling current node
	I1002 06:22:51.332667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:51.332703       1 main.go:301] handling current node
	I1002 06:23:01.331741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:01.331831       1 main.go:301] handling current node
	I1002 06:23:11.333511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:11.333545       1 main.go:301] handling current node
	I1002 06:23:21.335778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:21.335809       1 main.go:301] handling current node
	I1002 06:23:31.340881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:31.340935       1 main.go:301] handling current node
	I1002 06:23:41.339996       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:41.340031       1 main.go:301] handling current node
	I1002 06:23:51.332064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:51.332128       1 main.go:301] handling current node
	I1002 06:24:01.340172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:24:01.340229       1 main.go:301] handling current node
	I1002 06:24:11.340037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:24:11.340080       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2503d55a98b117f95312339e0408d46140df26986746e655ee44ca4b17d1543] <==
	I1002 06:13:10.968328       1 cache.go:39] Caches are synced for autoregister controller
	I1002 06:13:10.973719       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 06:13:11.003861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 06:13:11.864528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 06:13:12.011502       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 06:13:12.168303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 06:13:12.169465       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 06:13:12.174164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 06:13:12.718173       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 06:13:12.803056       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 06:13:12.845127       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 06:13:12.850799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 06:13:27.458495       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.42.200"}
	I1002 06:13:32.001334       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.74.230"}
	I1002 06:13:32.038247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 06:13:34.269456       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.97.18"}
	I1002 06:13:39.536669       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.16.67"}
	E1002 06:13:47.153716       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57876: use of closed network connection
	E1002 06:13:48.672727       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57894: use of closed network connection
	E1002 06:13:50.778146       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50950: use of closed network connection
	I1002 06:13:50.901947       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.248.3"}
	I1002 06:19:16.194630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 06:19:16.288521       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.174.148"}
	I1002 06:19:16.299708       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.92"}
	I1002 06:23:10.900793       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0826543035037388cfadc1a20ffaeab10d0bd916e3a71611034dc156659ba3d3] <==
	I1002 06:13:14.311871       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 06:13:14.311885       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 06:13:14.312075       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 06:13:14.313120       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 06:13:14.313147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 06:13:14.313172       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:13:14.313212       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:13:14.313272       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 06:13:14.313932       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 06:13:14.313956       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 06:13:14.316028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:13:14.318325       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:13:14.318470       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:13:14.321892       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 06:13:14.323622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:13:14.323638       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 06:13:14.323648       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 06:13:14.325834       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 06:13:14.331979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:19:16.235527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.239715       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.242312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.244051       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.245736       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.250431       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [a46012c8d77f4d59e1edfd2706d75b7ed32740ed7840142c8b6a163ad8125ce4] <==
	I1002 06:12:58.441929       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.039518       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 06:12:59.039550       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.041673       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 06:12:59.041844       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 06:12:59.046004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 06:12:59.046753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.057653       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1002 06:12:59.057945       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1002 06:13:00.446798       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I1002 06:13:00.446836       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1002 06:13:00.446852       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	F1002 06:13:00.447274       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [97a61b088ac75deb72639eca5c93e3931e2d507d4d6d431bcb874a08c79f4fd8] <==
	I1002 06:12:14.839850       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:14.903452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:15.004159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:15.004210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:15.004324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:15.023563       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:15.023618       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:15.029815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:15.030570       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:15.030605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:15.032931       1 config.go:200] "Starting service config controller"
	I1002 06:12:15.032981       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:15.033021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:15.032994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:15.033046       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:15.033079       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:15.033118       1 config.go:309] "Starting node config controller"
	I1002 06:12:15.033128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:15.033134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:15.133989       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b42bb88d18439d6670f0d6210dbbdaf5fd87083935495b10801ef7cf68b6f13a] <==
	I1002 06:12:50.973246       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:51.050576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:51.151332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:51.151368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:51.151463       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:51.172896       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:51.172973       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:51.178306       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:51.178665       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:51.178703       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:51.179784       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:51.179811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:51.179813       1 config.go:200] "Starting service config controller"
	I1002 06:12:51.179828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:51.179858       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:51.179864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:51.179865       1 config.go:309] "Starting node config controller"
	I1002 06:12:51.179883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:51.179890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:51.280911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:51.280956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:12:51.280964       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [53950e32492bd26e7a71310b1a5140df8125ff1d730bb46b83d84ae621fd3298] <==
	E1002 06:12:06.276119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:12:06.276159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:12:06.276232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:12:06.276228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:12:06.276252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:12:06.276317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:06.276332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:12:06.276345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:06.276383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:06.276398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.085724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:07.157911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:07.356949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:12:07.371978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:12:07.427138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:07.458065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:12:07.475022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.488961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1002 06:12:07.672968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410262       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 06:12:57.410597       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410616       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 06:12:57.410653       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 06:12:57.410701       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 06:12:57.410723       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf1bb9911e32d0b099e600db16880966f3a50443fd65c345cbbf32cb28da5b5a] <==
	I1002 06:12:58.351746       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.028789       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 06:12:59.028819       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.033800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:12:59.033904       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 06:12:59.033938       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.033975       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.034768       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034808       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.035924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.134547       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.135726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.136328       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:13:10.905345       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:13:10.905353       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:13:10.905408       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kubelet <==
	Oct 02 06:23:21 functional-199910 kubelet[5002]: E1002 06:23:21.884573    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:23:26 functional-199910 kubelet[5002]: E1002 06:23:26.883388    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:30 functional-199910 kubelet[5002]: E1002 06:23:30.883991    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:31 functional-199910 kubelet[5002]: E1002 06:23:31.883978    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:33 functional-199910 kubelet[5002]: E1002 06:23:33.883568    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.883981    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.884673    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:23:41 functional-199910 kubelet[5002]: E1002 06:23:41.883828    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:42 functional-199910 kubelet[5002]: E1002 06:23:42.884256    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:43 functional-199910 kubelet[5002]: E1002 06:23:43.883948    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:47 functional-199910 kubelet[5002]: E1002 06:23:47.884012    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:50 functional-199910 kubelet[5002]: E1002 06:23:50.883817    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:23:51 functional-199910 kubelet[5002]: E1002 06:23:51.884534    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:23:53 functional-199910 kubelet[5002]: E1002 06:23:53.884011    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:53 functional-199910 kubelet[5002]: E1002 06:23:53.884701    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:54 functional-199910 kubelet[5002]: E1002 06:23:54.883805    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:58 functional-199910 kubelet[5002]: E1002 06:23:58.884853    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:24:01 functional-199910 kubelet[5002]: E1002 06:24:01.883363    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:24:04 functional-199910 kubelet[5002]: E1002 06:24:04.884168    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:24:06 functional-199910 kubelet[5002]: E1002 06:24:06.883720    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:24:06 functional-199910 kubelet[5002]: E1002 06:24:06.884491    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:24:08 functional-199910 kubelet[5002]: E1002 06:24:08.883646    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:24:11 functional-199910 kubelet[5002]: E1002 06:24:11.884475    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:24:14 functional-199910 kubelet[5002]: E1002 06:24:14.883485    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:24:16 functional-199910 kubelet[5002]: E1002 06:24:16.884358    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	
	
	==> storage-provisioner [80952a8c291275da5e35ad70882231fdd9a7d83825c994efd21bb4f51557d477] <==
	I1002 06:12:26.077595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199910_0d7cdd1d-c7b6-42fc-bd8c-cd81e39b8dd7!
	W1002 06:12:27.985165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:27.988636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.991413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.995879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:31.999091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:32.003058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.006471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.010034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.013002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.017547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.019738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.023243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.026764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.030794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.033995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.037680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.041227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.046725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.050103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.053714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.056100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.059659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.063069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.067736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdcf1ac8db1d92b12e319a16fb30394c6f6dce3eecf4e5c46c0c0f15efea87df] <==
	W1002 06:23:51.480675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:53.483328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:53.487976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:55.492808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:55.496479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:57.498650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:57.501995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:59.505191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:59.508617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:01.511615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:01.515208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:03.518551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:03.522028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:05.525368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:05.530483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:07.533367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:07.536775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:09.538772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:09.542901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:11.545779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:11.549434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:13.552714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:13.557025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:15.559674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:24:15.563544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
helpers_test.go:269: (dbg) Run:  kubectl --context functional-199910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
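
For reference, the field-selector query the harness runs above can be reproduced programmatically; the Go sketch below is a minimal client-go equivalent, not part of the harness. The default-kubeconfig loading and the hard-coded "functional-199910" context name are assumptions for illustration (the harness instead passes --context to kubectl).

// list-non-running.go: list every pod, in any namespace, whose phase is
// not Running -- the same query helpers_test.go issues via kubectl above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig loading rules, with the context
	// overridden to the test profile's name.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-199910"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Server-side filter matching kubectl's --field-selector flag.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
		context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"},
	)
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
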
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1 (87.466456ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:19:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://30b85d3947fb716365efd7ebc1d9aa1ae0a31acc10a239c45a439219b7aacac2
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 06:19:06 +0000
	      Finished:     Thu, 02 Oct 2025 06:19:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gvnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gvnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-199910
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.042s (2.042s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-w8zxz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:50 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfndz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfndz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w8zxz to functional-199910
	  Warning  Failed     10m                  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m25s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m22s (x4 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    23s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     23s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6vrx2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:39 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kt2br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
	  Normal   Pulling    7m28s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m25s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m25s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    36s (x40 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     24s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:34 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hd7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j2hd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-199910
	  Warning  Failed     10m                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m31s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m28s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    41s (x39 over 10m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     26s (x40 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:40 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6brnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6brnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-199910
	  Warning  Failed     8m58s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m37s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m34s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s                kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    27s (x40 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     27s (x40 over 10m)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4mzf5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vlp7x" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.05s)
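Every failed pod in this run traces to the same root cause, visible in the events above: unauthenticated image pulls against Docker Hub returned 429 Too Many Requests. A quick way to confirm that from a live cluster is to count the rate-limit events directly (a sketch, not part of the test harness; the context name is taken from this report):

	kubectl --context functional-199910 get events -n default \
	  --field-selector reason=Failed \
	  -o custom-columns=POD:.involvedObject.name,MESSAGE:.message | grep -c toomanyrequests

Counting the matching events separates rate-limit noise from genuine image errors.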

TestFunctional/parallel/ServiceCmdConnect (602.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-199910 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-199910 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6vrx2" [540c3ceb-ae4f-4dc0-b99d-354efb31c102] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 06:23:39.846630861 +0000 UTC m=+1118.460254603
functional_test.go:1645: (dbg) Run:  kubectl --context functional-199910 describe po hello-node-connect-7d85dfc575-6vrx2 -n default
functional_test.go:1645: (dbg) kubectl --context functional-199910 describe po hello-node-connect-7d85dfc575-6vrx2 -n default:
Name:             hello-node-connect-7d85dfc575-6vrx2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199910/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:13:39 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-kt2br:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
  Normal   Pulling    6m50s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m47s (x5 over 9m54s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m47s (x5 over 9m54s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m48s (x19 over 9m53s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m25s (x21 over 9m53s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-199910 logs hello-node-connect-7d85dfc575-6vrx2 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-199910 logs hello-node-connect-7d85dfc575-6vrx2 -n default: exit status 1 (66.86068ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6vrx2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-199910 logs hello-node-connect-7d85dfc575-6vrx2 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-199910 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6vrx2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199910/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:13:39 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-kt2br:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
  Normal   Pulling    6m51s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m48s (x5 over 9m55s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m48s (x5 over 9m55s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x19 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m26s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-199910 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-199910 logs -l app=hello-node-connect: exit status 1 (61.283446ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6vrx2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-199910 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-199910 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.16.67
IPs:                      10.105.16.67
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30665/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
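The empty Endpoints field above is the service-level symptom of the same pull failure: a NodePort service only gets endpoints once a matching pod reports Ready. Two commands that make the link explicit (a sketch using the names from this report, not part of the harness):

	kubectl --context functional-199910 get endpoints hello-node-connect -n default
	kubectl --context functional-199910 get pods -l app=hello-node-connect -n default \
	  -o custom-columns=POD:.metadata.name,REASON:.status.containerStatuses[0].state.waiting.reason

With the echo-server container stuck in ImagePullBackOff, the endpoints list stays empty, so connection tests against NodePort 30665 cannot succeed.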
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-199910
helpers_test.go:243: (dbg) docker inspect functional-199910:

-- stdout --
	[
	    {
	        "Id": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	        "Created": "2025-10-02T06:11:55.541637226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:11:55.570474432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hostname",
	        "HostsPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hosts",
	        "LogPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded-json.log",
	        "Name": "/functional-199910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-199910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-199910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	                "LowerDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718-init/diff:/var/lib/docker/overlay2/298df2ba9683a73d350c1b6c983da9f2b87e35cf844050b5b24d44ff0e84e14d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-199910",
	                "Source": "/var/lib/docker/volumes/functional-199910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-199910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-199910",
	                "name.minikube.sigs.k8s.io": "functional-199910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f968325eef651f67db13113c73a3310ee76a7c88af5a211cc222343e85ee43d1",
	            "SandboxKey": "/var/run/docker/netns/f968325eef65",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-199910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:65:7a:15:4b:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d66feb0971ca31aa50fbd8d10400dca354f44739c3efb8d06e897cb43ffc6b4",
	                    "EndpointID": "50a6d2ecb2dfd2667a6e29fd9b2eea174bfafa2a4794ff1c52dc85ca797a6a00",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-199910",
	                        "a129060d7e93"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
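The inspect dump shows the node container itself is healthy: State.Status is "running", the profile volume and the /lib/modules bind are mounted, and all five service ports are published on 127.0.0.1. Individual fields can be pulled without the full JSON using docker inspect's Go-template formatting (a sketch; the container and network names come from this report):

	docker inspect -f '{{.State.Status}}' functional-199910
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-199910").IPAddress}}' functional-199910

Since the infrastructure checks out, the failures above are registry-side, not driver-side.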
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-199910 -n functional-199910
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs -n 25: (1.134459247s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-199910 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh -- ls -la /mount-9p                                                                          │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh sudo umount -f /mount-9p                                                                     │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount          │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount3 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh            │ functional-199910 ssh findmnt -T /mount1                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount          │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount          │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh            │ functional-199910 ssh findmnt -T /mount1                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh findmnt -T /mount2                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh findmnt -T /mount3                                                                           │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ mount          │ -p functional-199910 --kill=true                                                                                   │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start          │ -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-199910 --alsologtostderr -v=1                                                     │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ update-context │ functional-199910 update-context --alsologtostderr -v=2                                                            │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format short --alsologtostderr                                                        │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format yaml --alsologtostderr                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh            │ functional-199910 ssh pgrep buildkitd                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ image          │ functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr             │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls                                                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format json --alsologtostderr                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ image          │ functional-199910 image ls --format table --alsologtostderr                                                        │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:19:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:19:15.254520  429832 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:19:15.254770  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.254780  429832 out.go:374] Setting ErrFile to fd 2...
	I1002 06:19:15.254792  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.255023  429832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:19:15.255645  429832 out.go:368] Setting JSON to false
	I1002 06:19:15.256783  429832 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:19:15.256873  429832 start.go:140] virtualization: kvm guest
	I1002 06:19:15.258505  429832 out.go:179] * [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:19:15.259591  429832 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:19:15.259597  429832 notify.go:220] Checking for updates...
	I1002 06:19:15.260831  429832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:19:15.262162  429832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:19:15.263266  429832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:19:15.264267  429832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:19:15.265202  429832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:19:15.266577  429832 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:19:15.267099  429832 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:19:15.289007  429832 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:19:15.289098  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.340141  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.330436749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.340263  429832 docker.go:318] overlay module found
	I1002 06:19:15.341738  429832 out.go:179] * Using the docker driver based on existing profile
	I1002 06:19:15.342748  429832 start.go:304] selected driver: docker
	I1002 06:19:15.342764  429832 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.342901  429832 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:19:15.343027  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.398873  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.389220623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.399597  429832 cni.go:84] Creating CNI manager for ""
	I1002 06:19:15.399659  429832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:19:15.399708  429832 start.go:348] cluster config:
	{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.401261  429832 out.go:179] * dry-run validation complete!
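	The Last Start block above records a --dry-run invocation: minikube loads the existing profile, validates the docker driver and containerd runtime against it, and exits after "dry-run validation complete!" without mutating the cluster. The equivalent invocation, with the flags recorded in the audit table:
	
	    out/minikube-linux-amd64 start -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd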
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	30b85d3947fb7       56cc512116c8f       4 minutes ago       Exited              mount-munger              0                   dfc668eebccd9       busybox-mount                               default
	f6fa8a43c86fe       5107333e08a87       10 minutes ago      Running             mysql                     0                   070ec94bcd8f3       mysql-5bb876957f-cvvj2                      default
	c2503d55a98b1       c3994bc696102       10 minutes ago      Running             kube-apiserver            0                   c790b8963b464       kube-apiserver-functional-199910            kube-system
	0826543035037       c80c8dbafe7dd       10 minutes ago      Running             kube-controller-manager   2                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	a46012c8d77f4       c80c8dbafe7dd       10 minutes ago      Exited              kube-controller-manager   1                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	cf1bb9911e32d       7dd6aaa1717ab       10 minutes ago      Running             kube-scheduler            1                   054fb86bca056       kube-scheduler-functional-199910            kube-system
	73ee8e97a7860       5f1f5298c888d       10 minutes ago      Running             etcd                      1                   1a30670caae60       etcd-functional-199910                      kube-system
	fdcf1ac8db1d9       6e38f40d628db       10 minutes ago      Running             storage-provisioner       1                   a5747f1c0a73f       storage-provisioner                         kube-system
	f0bafa2f4b2c3       52546a367cc9e       10 minutes ago      Running             coredns                   1                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	b42bb88d18439       fc25172553d79       10 minutes ago      Running             kube-proxy                1                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	ddd36a4ac2f9f       409467f978b4a       10 minutes ago      Running             kindnet-cni               1                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	ff9cd2a0d98dc       52546a367cc9e       11 minutes ago      Exited              coredns                   0                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	80952a8c29127       6e38f40d628db       11 minutes ago      Exited              storage-provisioner       0                   a5747f1c0a73f       storage-provisioner                         kube-system
	0e6365f8d4553       409467f978b4a       11 minutes ago      Exited              kindnet-cni               0                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	97a61b088ac75       fc25172553d79       11 minutes ago      Exited              kube-proxy                0                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	08e26663c7e44       5f1f5298c888d       11 minutes ago      Exited              etcd                      0                   1a30670caae60       etcd-functional-199910                      kube-system
	53950e32492bd       7dd6aaa1717ab       11 minutes ago      Exited              kube-scheduler            0                   054fb86bca056       kube-scheduler-functional-199910            kube-system
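	The container status table is CRI-level output, so it can be reproduced interactively from inside the node; something like the following should work (a sketch, assuming crictl is on the node's PATH as it is in the kicbase image):
	
	    minikube -p functional-199910 ssh -- sudo crictl ps -a
	
	Note that every cluster component (kube-apiserver, etcd, scheduler, coredns, kube-proxy, kindnet, storage-provisioner, mysql) is Running; only the workloads that needed fresh Docker Hub pulls are missing.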
	
	
	==> containerd <==
	Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.884796936Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:20:04 functional-199910 containerd[3842]: time="2025-10-02T06:20:04.886503009Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:05 functional-199910 containerd[3842]: time="2025-10-02T06:20:05.475070954Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106347197Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:07 functional-199910 containerd[3842]: time="2025-10-02T06:20:07.106419885Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.884544461Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:20:43 functional-199910 containerd[3842]: time="2025-10-02T06:20:43.886118204Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:44 functional-199910 containerd[3842]: time="2025-10-02T06:20:44.462408812Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454784488Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:46 functional-199910 containerd[3842]: time="2025-10-02T06:20:46.454808932Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.884513245Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:20:56 functional-199910 containerd[3842]: time="2025-10-02T06:20:56.886275527Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:57 functional-199910 containerd[3842]: time="2025-10-02T06:20:57.461979996Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089171758Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:20:59 functional-199910 containerd[3842]: time="2025-10-02T06:20:59.089223010Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11046"
	Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.884785044Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:22:19 functional-199910 containerd[3842]: time="2025-10-02T06:22:19.886572046Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:20 functional-199910 containerd[3842]: time="2025-10-02T06:22:20.500547860Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127605836Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:22:22 functional-199910 containerd[3842]: time="2025-10-02T06:22:22.127686885Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.884730907Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:22:28 functional-199910 containerd[3842]: time="2025-10-02T06:22:28.886326946Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:29 functional-199910 containerd[3842]: time="2025-10-02T06:22:29.462579836Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105354289Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:22:31 functional-199910 containerd[3842]: time="2025-10-02T06:22:31.105447128Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
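
Analysis: the containerd log above is the proximate cause of the DashboardCmd failure. Every attempt to pull docker.io/kubernetesui/dashboard:v2.7.0 and docker.io/kubernetesui/metrics-scraper:v1.0.8 is rejected by Docker Hub with 429 Too Many Requests (the unauthenticated pull rate limit), so the dashboard pods never start and the DashboardCmd test fails. The recurring "failed to decode hosts.toml" / "invalid `host` tree" errors are a second, independent problem: the registry host configuration under /etc/containerd/certs.d is malformed, so containerd pulls straight from registry-1.docker.io and no mirror can absorb the requests. A minimal sketch of writing a well-formed hosts.toml via shell follows; the mirror endpoint mirror.gcr.io is an illustrative assumption, not what this CI host is configured with:

	# Sketch only: create a valid registry host config for docker.io.
	sudo mkdir -p /etc/containerd/certs.d/docker.io
	sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
	server = "https://registry-1.docker.io"

	# Each mirror is a [host."<url>"] table; an "invalid `host` tree"
	# decode error typically means this table is not well-formed.
	[host."https://mirror.gcr.io"]
	  capabilities = ["pull", "resolve"]
	EOF
	# hosts.toml is read at pull time, so no containerd restart is needed.

With a usable mirror (or authenticated pulls), the 429 responses from registry-1.docker.io would not be hit.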
	
	
	==> coredns [f0bafa2f4b2c3011baa87254e1977f39f0be514d931e8c686e86c0aa29d3b6ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34140 - 64363 "HINFO IN 6421585372913567829.3122299182694518145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036751794s
	
	
	==> coredns [ff9cd2a0d98dc28d87af1e35cad013fc327b6424af6df9d7e63b16213372132f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58400 - 64231 "HINFO IN 6580085158149520847.7999093773305824839. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113539547s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-199910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-199910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-199910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_12_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-199910
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:23:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:20:09 +0000   Thu, 02 Oct 2025 06:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-199910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 26786d65ffe140308153a7ab60e7851e
	  System UUID:                d452a066-3b39-4b2a-bb48-a6d5f3f27351
	  Boot ID:                    928ae711-d7b1-4c1e-8d35-81d1dcf6c7b5
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-w8zxz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  default                     hello-node-connect-7d85dfc575-6vrx2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-cvvj2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-lfbdz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-199910                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-nlvlv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-199910              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-199910     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6fsg9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-199910              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4mzf5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vlp7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-199910 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
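
The describe output confirms the node itself is healthy (Ready, no memory/disk/PID pressure) and that the two kubernetes-dashboard pods were scheduled 4m25s before this snapshot, yet have no corresponding Running containers in the listing at the top, consistent with the ImagePullBackOff in the kubelet log below. A quick way to confirm from the same cluster (the k8s-app label follows the upstream dashboard v2.7.0 manifests and is an assumption here):

	kubectl -n kubernetes-dashboard get pods
	kubectl -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard | grep -A8 'Events:'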
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 5d 35 94 2e 01 08 06
	[  +0.058144] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[  +7.548229] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[Oct 2 05:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
	[  +8.618588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e d9 8c 9f 19 f9 08 06
	[  +0.000520] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[  +0.839544] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[ +18.414075] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 fd ef 12 40 02 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[  +5.829441] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 7c 73 c3 88 96 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[ +15.373470] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 de db d2 97 bd 08 06
	[  +0.000392] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
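
The "martian source" entries are broadcast/ARP traffic between pod IPs on the 10.244.0.0/24 bridge leaking onto eth0; they are routine noise in Docker-driver minikube environments, are timestamped well before this test run (05:45 vs. the 06:19 dashboard deploy), and are unrelated to the failure.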
	
	
	==> etcd [08e26663c7e447c1795a392880992c67b7efa0c467f1cc535f872ec73d63ad38] <==
	{"level":"warn","ts":"2025-10-02T06:12:05.810669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.816571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.822847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.835996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.842577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.894403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:12:50.300585Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T06:12:50.300664Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T06:12:50.300764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T06:12:57.302505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T06:12:57.302730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.302839Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T06:12:57.303142Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T06:12:57.303158Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-02T06:12:57.302240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303485Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.304975Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T06:12:57.305029Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.305061Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T06:12:57.305075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [73ee8e97a78602fac85e902429f6351307dfd75d4e432518b658ad78e1e9d2b1] <==
	{"level":"warn","ts":"2025-10-02T06:13:10.328017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.335433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.342042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.352795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50930","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:50930: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-02T06:13:10.360606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.367410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.373004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.379059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.384978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.391903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.398291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.404172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.410209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.415890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.421667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.427687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.433534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.448100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.451132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.456833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.462331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.511748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51280","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:23:10.057111Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-10-02T06:23:10.075219Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1114,"took":"17.775311ms","hash":1240741800,"current-db-size-bytes":3690496,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-02T06:23:10.075258Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1240741800,"revision":1114,"compact-revision":-1}
	
	
	==> kernel <==
	 06:23:41 up  2:06,  0 user,  load average: 0.07, 0.32, 0.64
	Linux functional-199910 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e6365f8d4553ed3a43b91cd235f10a301c95e3ea2ce0b800d81621f382c5540] <==
	I1002 06:12:15.314795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 06:12:15.315055       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 06:12:15.315240       1 main.go:148] setting mtu 1500 for CNI 
	I1002 06:12:15.315261       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 06:12:15.315278       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T06:12:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 06:12:15.436330       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 06:12:15.436387       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 06:12:15.436773       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 06:12:15.614228       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 06:12:15.914278       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 06:12:15.914307       1 metrics.go:72] Registering metrics
	I1002 06:12:15.914403       1 controller.go:711] "Syncing nftables rules"
	I1002 06:12:25.437165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:25.437244       1 main.go:301] handling current node
	I1002 06:12:35.444020       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:35.444049       1 main.go:301] handling current node
	I1002 06:12:45.439055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:45.439091       1 main.go:301] handling current node
	
	
	==> kindnet [ddd36a4ac2f9f0eb8d3a5fb2bf60cae5868d293c2b8e98bf9f4f9f13c884ba40] <==
	I1002 06:21:41.332286       1 main.go:301] handling current node
	I1002 06:21:51.335264       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:21:51.335297       1 main.go:301] handling current node
	I1002 06:22:01.337210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:01.337253       1 main.go:301] handling current node
	I1002 06:22:11.334753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:11.334785       1 main.go:301] handling current node
	I1002 06:22:21.334694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:21.334726       1 main.go:301] handling current node
	I1002 06:22:31.340901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:31.340954       1 main.go:301] handling current node
	I1002 06:22:41.334569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:41.334622       1 main.go:301] handling current node
	I1002 06:22:51.332667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:22:51.332703       1 main.go:301] handling current node
	I1002 06:23:01.331741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:01.331831       1 main.go:301] handling current node
	I1002 06:23:11.333511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:11.333545       1 main.go:301] handling current node
	I1002 06:23:21.335778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:21.335809       1 main.go:301] handling current node
	I1002 06:23:31.340881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:31.340935       1 main.go:301] handling current node
	I1002 06:23:41.339996       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:23:41.340031       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2503d55a98b117f95312339e0408d46140df26986746e655ee44ca4b17d1543] <==
	I1002 06:13:10.968328       1 cache.go:39] Caches are synced for autoregister controller
	I1002 06:13:10.973719       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 06:13:11.003861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 06:13:11.864528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 06:13:12.011502       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 06:13:12.168303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 06:13:12.169465       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 06:13:12.174164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 06:13:12.718173       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 06:13:12.803056       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 06:13:12.845127       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 06:13:12.850799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 06:13:27.458495       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.42.200"}
	I1002 06:13:32.001334       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.74.230"}
	I1002 06:13:32.038247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 06:13:34.269456       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.97.18"}
	I1002 06:13:39.536669       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.16.67"}
	E1002 06:13:47.153716       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57876: use of closed network connection
	E1002 06:13:48.672727       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57894: use of closed network connection
	E1002 06:13:50.778146       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50950: use of closed network connection
	I1002 06:13:50.901947       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.248.3"}
	I1002 06:19:16.194630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 06:19:16.288521       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.174.148"}
	I1002 06:19:16.299708       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.92"}
	I1002 06:23:10.900793       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0826543035037388cfadc1a20ffaeab10d0bd916e3a71611034dc156659ba3d3] <==
	I1002 06:13:14.311871       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 06:13:14.311885       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 06:13:14.312075       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 06:13:14.313120       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 06:13:14.313147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 06:13:14.313172       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:13:14.313212       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:13:14.313272       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 06:13:14.313932       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 06:13:14.313956       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 06:13:14.316028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:13:14.318325       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:13:14.318470       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:13:14.321892       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 06:13:14.323622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:13:14.323638       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 06:13:14.323648       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 06:13:14.325834       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 06:13:14.331979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:19:16.235527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.239715       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.242312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.244051       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.245736       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.250431       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
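
These "serviceaccount ... not found" errors appear to be a startup race at 06:19:16: the dashboard addon's ReplicaSets were synced before their ServiceAccount had been created. They stop immediately, and the describe-nodes output above shows both dashboard pods were subsequently scheduled, which implies the ServiceAccount did get created. To verify directly:

	kubectl -n kubernetes-dashboard get serviceaccount kubernetes-dashboard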
	
	
	==> kube-controller-manager [a46012c8d77f4d59e1edfd2706d75b7ed32740ed7840142c8b6a163ad8125ce4] <==
	I1002 06:12:58.441929       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.039518       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 06:12:59.039550       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.041673       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 06:12:59.041844       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 06:12:59.046004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 06:12:59.046753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.057653       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1002 06:12:59.057945       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1002 06:13:00.446798       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I1002 06:13:00.446836       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1002 06:13:00.446852       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	F1002 06:13:00.447274       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
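
This earlier kube-controller-manager instance (the Exited container with attempt 1 in the listing) died with a fatal error because the apiserver at 192.168.49.2:8441 was still refusing connections while the control plane restarted; the replacement instance (0826543035037, attempt 2) came up cleanly. This is expected churn from the functional test's restart sequence and is unrelated to the image-pull failures.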
	
	
	==> kube-proxy [97a61b088ac75deb72639eca5c93e3931e2d507d4d6d431bcb874a08c79f4fd8] <==
	I1002 06:12:14.839850       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:14.903452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:15.004159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:15.004210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:15.004324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:15.023563       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:15.023618       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:15.029815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:15.030570       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:15.030605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:15.032931       1 config.go:200] "Starting service config controller"
	I1002 06:12:15.032981       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:15.033021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:15.032994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:15.033046       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:15.033079       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:15.033118       1 config.go:309] "Starting node config controller"
	I1002 06:12:15.033128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:15.033134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:15.133989       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b42bb88d18439d6670f0d6210dbbdaf5fd87083935495b10801ef7cf68b6f13a] <==
	I1002 06:12:50.973246       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:51.050576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:51.151332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:51.151368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:51.151463       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:51.172896       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:51.172973       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:51.178306       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:51.178665       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:51.178703       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:51.179784       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:51.179811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:51.179813       1 config.go:200] "Starting service config controller"
	I1002 06:12:51.179828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:51.179858       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:51.179864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:51.179865       1 config.go:309] "Starting node config controller"
	I1002 06:12:51.179883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:51.179890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:51.280911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:51.280956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:12:51.280964       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [53950e32492bd26e7a71310b1a5140df8125ff1d730bb46b83d84ae621fd3298] <==
	E1002 06:12:06.276119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:12:06.276159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:12:06.276232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:12:06.276228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:12:06.276252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:12:06.276317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:06.276332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:12:06.276345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:06.276383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:06.276398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.085724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:07.157911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:07.356949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:12:07.371978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:12:07.427138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:07.458065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:12:07.475022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.488961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1002 06:12:07.672968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410262       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 06:12:57.410597       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410616       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 06:12:57.410653       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 06:12:57.410701       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 06:12:57.410723       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf1bb9911e32d0b099e600db16880966f3a50443fd65c345cbbf32cb28da5b5a] <==
	I1002 06:12:58.351746       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.028789       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 06:12:59.028819       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.033800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:12:59.033904       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 06:12:59.033938       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.033975       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.034768       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034808       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.035924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.134547       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.135726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.136328       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:13:10.905345       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:13:10.905353       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:13:10.905408       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
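
	The restarted scheduler hits the same watch-permission denials until the bootstrap RBAC objects are re-synced. The grants involved live in the default ClusterRole/ClusterRoleBinding pair, which can be inspected directly (illustrative commands, not part of the test run):

	    kubectl --context functional-199910 get clusterrolebinding system:kube-scheduler -o yaml
	    kubectl --context functional-199910 get clusterrole system:kube-scheduler -o yaml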
	
	
	==> kubelet <==
	Oct 02 06:22:46 functional-199910 kubelet[5002]: E1002 06:22:46.884510    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:22:49 functional-199910 kubelet[5002]: E1002 06:22:49.883673    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:22:50 functional-199910 kubelet[5002]: E1002 06:22:50.883738    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:22:51 functional-199910 kubelet[5002]: E1002 06:22:51.884462    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:22:55 functional-199910 kubelet[5002]: E1002 06:22:55.883884    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:22:56 functional-199910 kubelet[5002]: E1002 06:22:56.883948    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:22:57 functional-199910 kubelet[5002]: E1002 06:22:57.884151    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:01 functional-199910 kubelet[5002]: E1002 06:23:01.883977    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:05 functional-199910 kubelet[5002]: E1002 06:23:05.883715    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:05 functional-199910 kubelet[5002]: E1002 06:23:05.884382    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:06 functional-199910 kubelet[5002]: E1002 06:23:06.884523    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:23:08 functional-199910 kubelet[5002]: E1002 06:23:08.884110    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:23:08 functional-199910 kubelet[5002]: E1002 06:23:08.884519    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:14 functional-199910 kubelet[5002]: E1002 06:23:14.884086    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:18 functional-199910 kubelet[5002]: E1002 06:23:18.884882    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:19 functional-199910 kubelet[5002]: E1002 06:23:19.883572    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:20 functional-199910 kubelet[5002]: E1002 06:23:20.884009    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:21 functional-199910 kubelet[5002]: E1002 06:23:21.884043    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:23:21 functional-199910 kubelet[5002]: E1002 06:23:21.884573    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:23:26 functional-199910 kubelet[5002]: E1002 06:23:26.883388    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:23:30 functional-199910 kubelet[5002]: E1002 06:23:30.883991    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:23:31 functional-199910 kubelet[5002]: E1002 06:23:31.883978    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:23:33 functional-199910 kubelet[5002]: E1002 06:23:33.883568    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.883981    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:23:36 functional-199910 kubelet[5002]: E1002 06:23:36.884673    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
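
	Every pull in this window dies on Docker Hub's unauthenticated 429 limit, so none of the Docker-Hub-hosted test workloads can start. Two common mitigations on a minikube profile (a sketch; assumes the host itself can still pull, e.g. after docker login, and that curl and jq are installed):

	    # side-load an image into the cluster's containerd, bypassing in-cluster pulls
	    docker pull docker.io/kicbase/echo-server:latest
	    minikube -p functional-199910 image load docker.io/kicbase/echo-server:latest

	    # check the remaining anonymous quota via Docker Hub's rate-limit headers
	    TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	    curl -fsS --head -H "Authorization: Bearer $TOKEN" \
	      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit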
	
	
	==> storage-provisioner [80952a8c291275da5e35ad70882231fdd9a7d83825c994efd21bb4f51557d477] <==
	I1002 06:12:26.077595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199910_0d7cdd1d-c7b6-42fc-bd8c-cd81e39b8dd7!
	W1002 06:12:27.985165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:27.988636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.991413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.995879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:31.999091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:32.003058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.006471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.010034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.013002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.017547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.019738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.023243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.026764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.030794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.033995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.037680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.041227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.046725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.050103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.053714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.056100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.059659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.063069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.067736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
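
	This steady two-second stream of warnings comes from the provisioner's leader election, which still locks on the legacy v1 Endpoints API. The lock object and its modern replacement can be checked with (illustrative; the lock name is inferred from the provisioner name k8s.io/minikube-hostpath with "/" replaced by "-"):

	    kubectl --context functional-199910 -n kube-system get endpoints k8s.io-minikube-hostpath
	    kubectl --context functional-199910 -n kube-system get endpointslices.discovery.k8s.io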
	
	
	==> storage-provisioner [fdcf1ac8db1d92b12e319a16fb30394c6f6dce3eecf4e5c46c0c0f15efea87df] <==
	W1002 06:23:17.366145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:19.368748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:19.372324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:21.375227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:21.378818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:23.381608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:23.385070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:25.388521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:25.392082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:27.394795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:27.399104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:29.401674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:29.405246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:31.408127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:31.412047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:33.415200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:33.419076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:35.422451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:35.426795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:37.429364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:37.433036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:39.435675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:39.440006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:41.442779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:23:41.446522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
helpers_test.go:269: (dbg) Run:  kubectl --context functional-199910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1 (88.180602ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:19:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://30b85d3947fb716365efd7ebc1d9aa1ae0a31acc10a239c45a439219b7aacac2
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 06:19:06 +0000
	      Finished:     Thu, 02 Oct 2025 06:19:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gvnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gvnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m37s  default-scheduler  Successfully assigned default/busybox-mount to functional-199910
	  Normal  Pulling    4m38s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m36s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.042s (2.042s including waiting). Image size: 2395207 bytes.
	  Normal  Created    4m36s  kubelet            Created container: mount-munger
	  Normal  Started    4m36s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-w8zxz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:50 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfndz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfndz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m51s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w8zxz to functional-199910
	  Warning  Failed     9m34s                   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m50s (x5 over 9m51s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m47s (x4 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m47s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m39s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m39s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
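
	With the backoff up to x21 after ten minutes, the retry history reads more easily straight from the events API than from describe output (illustrative):

	    kubectl --context functional-199910 get events \
	      --field-selector involvedObject.name=hello-node-75c85bcc94-w8zxz \
	      --sort-by=.lastTimestamp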
	
	
	Name:             hello-node-connect-7d85dfc575-6vrx2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:39 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kt2br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
	  Normal   Pulling    6m53s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m50s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m50s (x5 over 9m57s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m51s (x19 over 9m56s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x40 over 9m56s)     kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:34 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hd7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j2hd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/nginx-svc to functional-199910
	  Warning  Failed     9m59s                   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m56s (x5 over 10m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m53s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m53s (x4 over 9m43s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m54s (x19 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x39 over 9m59s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
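
	Note that the two failures above reference different digests for nginx:alpine (sha256:60e48a05... at first, sha256:42a516af... on later retries), i.e. the tag was re-pointed upstream during the run. The digest a tag currently resolves to can be fetched without downloading any layers (a sketch; docker manifest inspect queries the registry directly and needs a reasonably recent Docker client):

	    docker manifest inspect docker.io/library/nginx:alpine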
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:40 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6brnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6brnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/sp-pod to functional-199910
	  Warning  Failed     8m23s (x4 over 9m55s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m59s (x5 over 9m55s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m59s                   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m51s (x19 over 9m54s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m24s (x21 over 9m54s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4mzf5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vlp7x" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.70s)
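
ServiceCmdConnect fails here because the echo-server pods never became Ready, not because of service networking; with working pods, the manual equivalent of what the test exercises would be (illustrative):

    URL=$(minikube -p functional-199910 service hello-node-connect --url)
    curl -s "$URL"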

TestFunctional/parallel/PersistentVolumeClaim (368.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [edf0f9b4-fe07-4a9c-aba7-8e48c450625a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003627665s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-199910 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-199910 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-199910 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-199910 apply -f testdata/storage-provisioner/pod.yaml
I1002 06:13:40.347695  379278 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [30225e93-f680-46a4-ad8c-b4adbd692c1f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 06:19:40.657072569 +0000 UTC m=+879.270696316
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-199910 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-199910 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199910/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:13:40 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6brnk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-6brnk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-199910
  Warning  Failed     4m21s (x4 over 5m53s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m (x5 over 6m)        kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     2m57s (x5 over 5m53s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m57s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     49s (x19 over 5m52s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    22s (x21 over 5m52s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-199910 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-199910 logs sp-pod -n default: exit status 1 (57.889247ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-199910 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
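
The storage side of this test appears to have worked: the claim bound and sp-pod mounted it (PodReadyToStartContainers is True); only the nginx image pull failed. That can be confirmed independently of the pod (illustrative):

    kubectl --context functional-199910 get pvc myclaim -o jsonpath='{.status.phase}'
    kubectl --context functional-199910 get pv
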
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-199910
helpers_test.go:243: (dbg) docker inspect functional-199910:

-- stdout --
	[
	    {
	        "Id": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	        "Created": "2025-10-02T06:11:55.541637226Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 411620,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:11:55.570474432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hostname",
	        "HostsPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/hosts",
	        "LogPath": "/var/lib/docker/containers/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded/a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded-json.log",
	        "Name": "/functional-199910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-199910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-199910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a129060d7e93b21b338c2aa00a766f151b1698e9fecb1ffe2d82b4bab89daded",
	                "LowerDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718-init/diff:/var/lib/docker/overlay2/298df2ba9683a73d350c1b6c983da9f2b87e35cf844050b5b24d44ff0e84e14d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eb19350948b954d3f3cef2ec88b1b9dcc1bb8c2cbcc956abedd944186369d718/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-199910",
	                "Source": "/var/lib/docker/volumes/functional-199910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-199910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-199910",
	                "name.minikube.sigs.k8s.io": "functional-199910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f968325eef651f67db13113c73a3310ee76a7c88af5a211cc222343e85ee43d1",
	            "SandboxKey": "/var/run/docker/netns/f968325eef65",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-199910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:65:7a:15:4b:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d66feb0971ca31aa50fbd8d10400dca354f44739c3efb8d06e897cb43ffc6b4",
	                    "EndpointID": "50a6d2ecb2dfd2667a6e29fd9b2eea174bfafa2a4794ff1c52dc85ca797a6a00",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-199910",
	                        "a129060d7e93"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
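
The inspect dump above can be narrowed with a Go template instead of reading the full JSON; for example, to recover the host port published for the apiserver's 8441/tcp (value as shown in the dump):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-199910
    # -> 33162
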
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-199910 -n functional-199910
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs -n 25: (1.121766201s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdany-port999403763/001:/mount-9p --alsologtostderr -v=1                    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh       │ functional-199910 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh       │ functional-199910 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh -- ls -la /mount-9p                                                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh cat /mount-9p/test-1759385943040280221                                                                      │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh stat /mount-9p/created-by-test                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh stat /mount-9p/created-by-pod                                                                               │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh sudo umount -f /mount-9p                                                                                    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount     │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdspecific-port2307594671/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh       │ functional-199910 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh -- ls -la /mount-9p                                                                                         │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh sudo umount -f /mount-9p                                                                                    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount     │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount3 --alsologtostderr -v=1                │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh       │ functional-199910 ssh findmnt -T /mount1                                                                                          │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount     │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1                │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ mount     │ -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1                │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ ssh       │ functional-199910 ssh findmnt -T /mount1                                                                                          │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh findmnt -T /mount2                                                                                          │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ ssh       │ functional-199910 ssh findmnt -T /mount3                                                                                          │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │ 02 Oct 25 06:19 UTC │
	│ mount     │ -p functional-199910 --kill=true                                                                                                  │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start     │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start     │ -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ start     │ -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                             │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-199910 --alsologtostderr -v=1                                                                    │ functional-199910 │ jenkins │ v1.37.0 │ 02 Oct 25 06:19 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:19:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:19:15.254520  429832 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:19:15.254770  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.254780  429832 out.go:374] Setting ErrFile to fd 2...
	I1002 06:19:15.254792  429832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.255023  429832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:19:15.255645  429832 out.go:368] Setting JSON to false
	I1002 06:19:15.256783  429832 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:19:15.256873  429832 start.go:140] virtualization: kvm guest
	I1002 06:19:15.258505  429832 out.go:179] * [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:19:15.259591  429832 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:19:15.259597  429832 notify.go:220] Checking for updates...
	I1002 06:19:15.260831  429832 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:19:15.262162  429832 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:19:15.263266  429832 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:19:15.264267  429832 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:19:15.265202  429832 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:19:15.266577  429832 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:19:15.267099  429832 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:19:15.289007  429832 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:19:15.289098  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.340141  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.330436749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.340263  429832 docker.go:318] overlay module found
	I1002 06:19:15.341738  429832 out.go:179] * Using the docker driver based on existing profile
	I1002 06:19:15.342748  429832 start.go:304] selected driver: docker
	I1002 06:19:15.342764  429832 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.342901  429832 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:19:15.343027  429832 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.398873  429832 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.389220623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.399597  429832 cni.go:84] Creating CNI manager for ""
	I1002 06:19:15.399659  429832 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:19:15.399708  429832 start.go:348] cluster config:
	{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.401261  429832 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	30b85d3947fb7       56cc512116c8f       34 seconds ago      Exited              mount-munger              0                   dfc668eebccd9       busybox-mount                               default
	f6fa8a43c86fe       5107333e08a87       6 minutes ago       Running             mysql                     0                   070ec94bcd8f3       mysql-5bb876957f-cvvj2                      default
	c2503d55a98b1       c3994bc696102       6 minutes ago       Running             kube-apiserver            0                   c790b8963b464       kube-apiserver-functional-199910            kube-system
	0826543035037       c80c8dbafe7dd       6 minutes ago       Running             kube-controller-manager   2                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	a46012c8d77f4       c80c8dbafe7dd       6 minutes ago       Exited              kube-controller-manager   1                   b57c704d4fda4       kube-controller-manager-functional-199910   kube-system
	cf1bb9911e32d       7dd6aaa1717ab       6 minutes ago       Running             kube-scheduler            1                   054fb86bca056       kube-scheduler-functional-199910            kube-system
	73ee8e97a7860       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   1a30670caae60       etcd-functional-199910                      kube-system
	fdcf1ac8db1d9       6e38f40d628db       6 minutes ago       Running             storage-provisioner       1                   a5747f1c0a73f       storage-provisioner                         kube-system
	f0bafa2f4b2c3       52546a367cc9e       6 minutes ago       Running             coredns                   1                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	b42bb88d18439       fc25172553d79       6 minutes ago       Running             kube-proxy                1                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	ddd36a4ac2f9f       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	ff9cd2a0d98dc       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   1893e74468bc9       coredns-66bc5c9577-lfbdz                    kube-system
	80952a8c29127       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       0                   a5747f1c0a73f       storage-provisioner                         kube-system
	0e6365f8d4553       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   3b55bb31d621a       kindnet-nlvlv                               kube-system
	97a61b088ac75       fc25172553d79       7 minutes ago       Exited              kube-proxy                0                   1e56f086695d1       kube-proxy-6fsg9                            kube-system
	08e26663c7e44       5f1f5298c888d       7 minutes ago       Exited              etcd                      0                   1a30670caae60       etcd-functional-199910                      kube-system
	53950e32492bd       7dd6aaa1717ab       7 minutes ago       Exited              kube-scheduler            0                   054fb86bca056       kube-scheduler-functional-199910            kube-system
	
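	# The table above is CRI-level state; a sketch of reproducing it on the node
	# with crictl (profile name taken from this report):
	#   minikube -p functional-199910 ssh -- sudo crictl ps -a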
	
	==> containerd <==
	Oct 02 06:19:19 functional-199910 containerd[3842]: time="2025-10-02T06:19:19.303545715Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=12711"
	Oct 02 06:19:19 functional-199910 containerd[3842]: time="2025-10-02T06:19:19.303525032Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:19:19 functional-199910 containerd[3842]: time="2025-10-02T06:19:19.304352402Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:19:19 functional-199910 containerd[3842]: time="2025-10-02T06:19:19.305496976Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:19 functional-199910 containerd[3842]: time="2025-10-02T06:19:19.879804581Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:21 functional-199910 containerd[3842]: time="2025-10-02T06:19:21.492502642Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:19:21 functional-199910 containerd[3842]: time="2025-10-02T06:19:21.492551443Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 02 06:19:32 functional-199910 containerd[3842]: time="2025-10-02T06:19:32.884076783Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 06:19:32 functional-199910 containerd[3842]: time="2025-10-02T06:19:32.885731287Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:33 functional-199910 containerd[3842]: time="2025-10-02T06:19:33.502968378Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:35 functional-199910 containerd[3842]: time="2025-10-02T06:19:35.125900752Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:19:35 functional-199910 containerd[3842]: time="2025-10-02T06:19:35.125950886Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 02 06:19:35 functional-199910 containerd[3842]: time="2025-10-02T06:19:35.126583140Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 06:19:35 functional-199910 containerd[3842]: time="2025-10-02T06:19:35.127766522Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:35 functional-199910 containerd[3842]: time="2025-10-02T06:19:35.704368133Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:37 functional-199910 containerd[3842]: time="2025-10-02T06:19:37.324710285Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:19:37 functional-199910 containerd[3842]: time="2025-10-02T06:19:37.324758998Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 02 06:19:37 functional-199910 containerd[3842]: time="2025-10-02T06:19:37.325451985Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 06:19:37 functional-199910 containerd[3842]: time="2025-10-02T06:19:37.326704675Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:37 functional-199910 containerd[3842]: time="2025-10-02T06:19:37.907074039Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:39 functional-199910 containerd[3842]: time="2025-10-02T06:19:39.524115267Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:19:39 functional-199910 containerd[3842]: time="2025-10-02T06:19:39.524175144Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 02 06:19:40 functional-199910 containerd[3842]: time="2025-10-02T06:19:40.884861446Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 02 06:19:40 functional-199910 containerd[3842]: time="2025-10-02T06:19:40.886602200Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:19:41 functional-199910 containerd[3842]: time="2025-10-02T06:19:41.501163446Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	
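	# The recurring "failed to decode hosts.toml" / "invalid `host` tree" errors
	# mean the registry hosts file under /etc/containerd/certs.d/ is malformed.
	# A well-formed hosts.toml for containerd 1.7 looks like this sketch (the
	# mirror host is a placeholder, not taken from this report):
	#
	#   server = "https://registry-1.docker.io"
	#
	#   [host."https://mirror.example.com"]
	#     capabilities = ["pull", "resolve"]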
	
	==> coredns [f0bafa2f4b2c3011baa87254e1977f39f0be514d931e8c686e86c0aa29d3b6ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34140 - 64363 "HINFO IN 6421585372913567829.3122299182694518145. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036751794s
	
	
	==> coredns [ff9cd2a0d98dc28d87af1e35cad013fc327b6424af6df9d7e63b16213372132f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58400 - 64231 "HINFO IN 6580085158149520847.7999093773305824839. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.113539547s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-199910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-199910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-199910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_12_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-199910
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:19:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:19:18 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:19:18 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:19:18 +0000   Thu, 02 Oct 2025 06:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:19:18 +0000   Thu, 02 Oct 2025 06:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-199910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 26786d65ffe140308153a7ab60e7851e
	  System UUID:                d452a066-3b39-4b2a-bb48-a6d5f3f27351
	  Boot ID:                    928ae711-d7b1-4c1e-8d35-81d1dcf6c7b5
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-w8zxz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  default                     hello-node-connect-7d85dfc575-6vrx2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     mysql-5bb876957f-cvvj2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m9s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-lfbdz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m27s
	  kube-system                 etcd-functional-199910                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m33s
	  kube-system                 kindnet-nlvlv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m27s
	  kube-system                 kube-apiserver-functional-199910              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-functional-199910     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-proxy-6fsg9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-functional-199910              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4mzf5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vlp7x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m26s                  kube-proxy       
	  Normal  Starting                 6m50s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m33s                  kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s                  kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s                  kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m28s                  node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
	  Normal  NodeReady                7m16s                  kubelet          Node functional-199910 status is now: NodeReady
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node functional-199910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node functional-199910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node functional-199910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m27s                  node-controller  Node functional-199910 event: Registered Node functional-199910 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 5d 35 94 2e 01 08 06
	[  +0.058144] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[  +7.548229] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[Oct 2 05:45] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
	[  +8.618588] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e d9 8c 9f 19 f9 08 06
	[  +0.000520] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a fd c4 dd c4 6d 08 06
	[  +0.839544] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[ +18.414075] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 fd ef 12 40 02 08 06
	[  +0.000360] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 98 6a 33 ef 13 08 06
	[  +5.829441] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 7c 73 c3 88 96 08 06
	[  +0.000311] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 ab 58 d0 fd cd 08 06
	[ +15.373470] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 de db d2 97 bd 08 06
	[  +0.000392] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 b4 f2 37 23 6e 08 06
	
	
	==> etcd [08e26663c7e447c1795a392880992c67b7efa0c467f1cc535f872ec73d63ad38] <==
	{"level":"warn","ts":"2025-10-02T06:12:05.810669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.816571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.822847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.835996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.842577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.849081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:12:05.894403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:12:50.300585Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T06:12:50.300664Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T06:12:50.300764Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T06:12:57.302505Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T06:12:57.302730Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.302839Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T06:12:57.303142Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T06:12:57.303158Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-02T06:12:57.302240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303485Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T06:12:57.303507Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T06:12:57.303516Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.304975Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T06:12:57.305029Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T06:12:57.305061Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T06:12:57.305075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-199910","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [73ee8e97a78602fac85e902429f6351307dfd75d4e432518b658ad78e1e9d2b1] <==
	{"level":"warn","ts":"2025-10-02T06:13:10.304308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.312841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.318810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.328017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.335433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.342042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.352795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50930","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:50930: read: connection reset by peer"}
	{"level":"warn","ts":"2025-10-02T06:13:10.360606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.367410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.373004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.379059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.384978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.391903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.398291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.404172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.410209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.415890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.421667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.427687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.433534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.448100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.451132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.456833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.462331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:13:10.511748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51280","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 06:19:41 up  2:02,  0 user,  load average: 0.32, 0.53, 0.78
	Linux functional-199910 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0e6365f8d4553ed3a43b91cd235f10a301c95e3ea2ce0b800d81621f382c5540] <==
	I1002 06:12:15.314795       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 06:12:15.315055       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 06:12:15.315240       1 main.go:148] setting mtu 1500 for CNI 
	I1002 06:12:15.315261       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 06:12:15.315278       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T06:12:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 06:12:15.436330       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 06:12:15.436387       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 06:12:15.436773       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 06:12:15.614228       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 06:12:15.914278       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 06:12:15.914307       1 metrics.go:72] Registering metrics
	I1002 06:12:15.914403       1 controller.go:711] "Syncing nftables rules"
	I1002 06:12:25.437165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:25.437244       1 main.go:301] handling current node
	I1002 06:12:35.444020       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:35.444049       1 main.go:301] handling current node
	I1002 06:12:45.439055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:12:45.439091       1 main.go:301] handling current node
	
	
	==> kindnet [ddd36a4ac2f9f0eb8d3a5fb2bf60cae5868d293c2b8e98bf9f4f9f13c884ba40] <==
	I1002 06:17:41.335106       1 main.go:301] handling current node
	I1002 06:17:51.335131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:17:51.335170       1 main.go:301] handling current node
	I1002 06:18:01.339696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:01.339734       1 main.go:301] handling current node
	I1002 06:18:11.332797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:11.332833       1 main.go:301] handling current node
	I1002 06:18:21.334812       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:21.334841       1 main.go:301] handling current node
	I1002 06:18:31.341080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:31.341119       1 main.go:301] handling current node
	I1002 06:18:41.336311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:41.336354       1 main.go:301] handling current node
	I1002 06:18:51.334292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:18:51.334329       1 main.go:301] handling current node
	I1002 06:19:01.331719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:19:01.331756       1 main.go:301] handling current node
	I1002 06:19:11.334018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:19:11.334056       1 main.go:301] handling current node
	I1002 06:19:21.332143       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:19:21.332200       1 main.go:301] handling current node
	I1002 06:19:31.340286       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:19:31.340327       1 main.go:301] handling current node
	I1002 06:19:41.335017       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:19:41.335046       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2503d55a98b117f95312339e0408d46140df26986746e655ee44ca4b17d1543] <==
	I1002 06:13:10.968313       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 06:13:10.968328       1 cache.go:39] Caches are synced for autoregister controller
	I1002 06:13:10.973719       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 06:13:11.003861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 06:13:11.864528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 06:13:12.011502       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 06:13:12.168303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 06:13:12.169465       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 06:13:12.174164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 06:13:12.718173       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 06:13:12.803056       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 06:13:12.845127       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 06:13:12.850799       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 06:13:27.458495       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.42.200"}
	I1002 06:13:32.001334       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.74.230"}
	I1002 06:13:32.038247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 06:13:34.269456       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.97.18"}
	I1002 06:13:39.536669       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.16.67"}
	E1002 06:13:47.153716       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57876: use of closed network connection
	E1002 06:13:48.672727       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57894: use of closed network connection
	E1002 06:13:50.778146       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50950: use of closed network connection
	I1002 06:13:50.901947       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.248.3"}
	I1002 06:19:16.194630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 06:19:16.288521       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.174.148"}
	I1002 06:19:16.299708       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.133.92"}
	
	
	==> kube-controller-manager [0826543035037388cfadc1a20ffaeab10d0bd916e3a71611034dc156659ba3d3] <==
	I1002 06:13:14.311871       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 06:13:14.311885       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 06:13:14.312075       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 06:13:14.313120       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 06:13:14.313147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 06:13:14.313172       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 06:13:14.313212       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:13:14.313272       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 06:13:14.313932       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 06:13:14.313956       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 06:13:14.316028       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 06:13:14.318325       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 06:13:14.318470       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:13:14.321892       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 06:13:14.323622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:13:14.323638       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 06:13:14.323648       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 06:13:14.325834       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 06:13:14.331979       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 06:19:16.235527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.239715       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.242312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.244051       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.245736       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 06:19:16.250431       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [a46012c8d77f4d59e1edfd2706d75b7ed32740ed7840142c8b6a163ad8125ce4] <==
	I1002 06:12:58.441929       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.039518       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1002 06:12:59.039550       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.041673       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 06:12:59.041844       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 06:12:59.046004       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1002 06:12:59.046753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.057653       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1002 06:12:59.057945       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1002 06:13:00.446798       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I1002 06:13:00.446836       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1002 06:13:00.446852       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	F1002 06:13:00.447274       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/pvc-protection-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [97a61b088ac75deb72639eca5c93e3931e2d507d4d6d431bcb874a08c79f4fd8] <==
	I1002 06:12:14.839850       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:14.903452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:15.004159       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:15.004210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:15.004324       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:15.023563       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:15.023618       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:15.029815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:15.030570       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:15.030605       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:15.032931       1 config.go:200] "Starting service config controller"
	I1002 06:12:15.032981       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:15.033021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:15.032994       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:15.033046       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:15.033079       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:15.033118       1 config.go:309] "Starting node config controller"
	I1002 06:12:15.033128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:15.033134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:12:15.133970       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:15.133989       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b42bb88d18439d6670f0d6210dbbdaf5fd87083935495b10801ef7cf68b6f13a] <==
	I1002 06:12:50.973246       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:12:51.050576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:12:51.151332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:12:51.151368       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:12:51.151463       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:12:51.172896       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:12:51.172973       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:12:51.178306       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:12:51.178665       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:12:51.178703       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:51.179784       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:12:51.179811       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:12:51.179813       1 config.go:200] "Starting service config controller"
	I1002 06:12:51.179828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:12:51.179858       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:12:51.179864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:12:51.179865       1 config.go:309] "Starting node config controller"
	I1002 06:12:51.179883       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:12:51.179890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:12:51.280911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 06:12:51.280956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:12:51.280964       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [53950e32492bd26e7a71310b1a5140df8125ff1d730bb46b83d84ae621fd3298] <==
	E1002 06:12:06.276119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:12:06.276159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:12:06.276232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:12:06.276228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:12:06.276252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:12:06.276317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:06.276332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:12:06.276345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:06.276383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:06.276398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.085724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:12:07.157911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:12:07.356949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:12:07.371978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:12:07.427138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:12:07.458065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 06:12:07.475022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:12:07.488961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1002 06:12:07.672968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410262       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 06:12:57.410597       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:57.410616       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 06:12:57.410653       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 06:12:57.410701       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 06:12:57.410723       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf1bb9911e32d0b099e600db16880966f3a50443fd65c345cbbf32cb28da5b5a] <==
	I1002 06:12:58.351746       1 serving.go:386] Generated self-signed cert in-memory
	I1002 06:12:59.028789       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 06:12:59.028819       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:12:59.033800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:12:59.033904       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 06:12:59.033938       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.033975       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 06:12:59.034768       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034785       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.034808       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.035924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 06:12:59.134547       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 06:12:59.135726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:12:59.136328       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:13:10.905345       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:13:10.905353       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:13:10.905408       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:13:10.905328       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kubelet <==
	Oct 02 06:19:19 functional-199910 kubelet[5002]: E1002 06:19:19.303900    5002 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 06:19:19 functional-199910 kubelet[5002]: E1002 06:19:19.303966    5002 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 06:19:19 functional-199910 kubelet[5002]: E1002 06:19:19.304184    5002 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5_kubernetes-dashboard(ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:19:19 functional-199910 kubelet[5002]: E1002 06:19:19.304251    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:19:19 functional-199910 kubelet[5002]: E1002 06:19:19.730153    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	Oct 02 06:19:21 functional-199910 kubelet[5002]: E1002 06:19:21.492816    5002 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 06:19:21 functional-199910 kubelet[5002]: E1002 06:19:21.492873    5002 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 06:19:21 functional-199910 kubelet[5002]: E1002 06:19:21.493009    5002 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-vlp7x_kubernetes-dashboard(8c621264-b906-43b5-8018-d110a9711c8c): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:19:21 functional-199910 kubelet[5002]: E1002 06:19:21.493064    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:19:21 functional-199910 kubelet[5002]: E1002 06:19:21.734741    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:19:26 functional-199910 kubelet[5002]: E1002 06:19:26.884339    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d446da26-c4e3-4120-8392-398030fe9f55"
	Oct 02 06:19:29 functional-199910 kubelet[5002]: E1002 06:19:29.883460    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-6vrx2" podUID="540c3ceb-ae4f-4dc0-b99d-354efb31c102"
	Oct 02 06:19:31 functional-199910 kubelet[5002]: E1002 06:19:31.883501    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-w8zxz" podUID="952a9830-06ce-4f87-9055-b90e32fb9805"
	Oct 02 06:19:35 functional-199910 kubelet[5002]: E1002 06:19:35.126142    5002 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 06:19:35 functional-199910 kubelet[5002]: E1002 06:19:35.126194    5002 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 06:19:35 functional-199910 kubelet[5002]: E1002 06:19:35.126385    5002 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(30225e93-f680-46a4-ad8c-b4adbd692c1f): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:19:35 functional-199910 kubelet[5002]: E1002 06:19:35.126449    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="30225e93-f680-46a4-ad8c-b4adbd692c1f"
	Oct 02 06:19:37 functional-199910 kubelet[5002]: E1002 06:19:37.325017    5002 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 06:19:37 functional-199910 kubelet[5002]: E1002 06:19:37.325076    5002 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 02 06:19:37 functional-199910 kubelet[5002]: E1002 06:19:37.325303    5002 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-vlp7x_kubernetes-dashboard(8c621264-b906-43b5-8018-d110a9711c8c): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:19:37 functional-199910 kubelet[5002]: E1002 06:19:37.325374    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vlp7x" podUID="8c621264-b906-43b5-8018-d110a9711c8c"
	Oct 02 06:19:39 functional-199910 kubelet[5002]: E1002 06:19:39.524429    5002 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 06:19:39 functional-199910 kubelet[5002]: E1002 06:19:39.524486    5002 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 02 06:19:39 functional-199910 kubelet[5002]: E1002 06:19:39.524575    5002 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5_kubernetes-dashboard(ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:19:39 functional-199910 kubelet[5002]: E1002 06:19:39.524609    5002 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4mzf5" podUID="ca2b87c9-5e0e-4d06-9349-7f8d328f5c5d"
	
	
	==> storage-provisioner [80952a8c291275da5e35ad70882231fdd9a7d83825c994efd21bb4f51557d477] <==
	I1002 06:12:26.077595       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-199910_0d7cdd1d-c7b6-42fc-bd8c-cd81e39b8dd7!
	W1002 06:12:27.985165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:27.988636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.991413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:29.995879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:31.999091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:32.003058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.006471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:34.010034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.013002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:36.017547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.019738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:38.023243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.026764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:40.030794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.033995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:42.037680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.041227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:44.046725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.050103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:46.053714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.056100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:48.059659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.063069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:12:50.067736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdcf1ac8db1d92b12e319a16fb30394c6f6dce3eecf4e5c46c0c0f15efea87df] <==
	W1002 06:19:16.544747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:18.547715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:18.552002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:20.554724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:20.558086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:22.560892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:22.564576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:24.566865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:24.570388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:26.572863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:26.577359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:28.580051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:28.583370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:30.586002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:30.590516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:32.593070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:32.596834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:34.598850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:34.603059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:36.606241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:36.609698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:38.611876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:38.616588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:40.619902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:19:40.624889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
helpers_test.go:269: (dbg) Run:  kubectl --context functional-199910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1 (90.82675ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:19:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://30b85d3947fb716365efd7ebc1d9aa1ae0a31acc10a239c45a439219b7aacac2
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 06:19:06 +0000
	      Finished:     Thu, 02 Oct 2025 06:19:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gvnd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gvnd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  38s   default-scheduler  Successfully assigned default/busybox-mount to functional-199910
	  Normal  Pulling    38s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     36s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.042s (2.042s including waiting). Image size: 2395207 bytes.
	  Normal  Created    36s   kubelet            Created container: mount-munger
	  Normal  Started    36s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-w8zxz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:50 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfndz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hfndz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m51s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w8zxz to functional-199910
	  Warning  Failed     5m34s                  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m50s (x5 over 5m51s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m47s (x4 over 5m49s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m47s (x5 over 5m49s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x21 over 5m48s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     39s (x21 over 5m48s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6vrx2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:39 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kt2br (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kt2br:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6vrx2 to functional-199910
	  Normal   Pulling    2m53s (x5 over 6m3s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m50s (x5 over 5m57s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m50s (x5 over 5m57s)  kubelet            Error: ErrImagePull
	  Warning  Failed     51s (x19 over 5m56s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    28s (x21 over 5m56s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:34 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hd7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j2hd7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m8s                   default-scheduler  Successfully assigned default/nginx-svc to functional-199910
	  Warning  Failed     5m59s                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m56s (x5 over 6m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m53s (x5 over 5m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m53s (x4 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s (x19 over 5m59s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    28s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-199910/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:13:40 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6brnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6brnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-199910
	  Warning  Failed     4m23s (x4 over 5m55s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m2s (x5 over 6m2s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m59s (x5 over 5m55s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m59s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x19 over 5m54s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    24s (x21 over 5m54s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-4mzf5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vlp7x" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-199910 describe pod busybox-mount hello-node-75c85bcc94-w8zxz hello-node-connect-7d85dfc575-6vrx2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-4mzf5 kubernetes-dashboard-855c9754f9-vlp7x: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.73s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-199910 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d446da26-c4e3-4120-8392-398030fe9f55] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-02 06:17:34.583667839 +0000 UTC m=+753.197291572
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-199910 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-199910 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199910/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:13:34 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2hd7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-j2hd7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-199910
  Warning  Failed     3m51s                kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    48s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     45s (x5 over 3m51s)  kubelet            Error: ErrImagePull
  Warning  Failed     45s (x4 over 3m35s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    6s (x13 over 3m51s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     6s (x13 over 3m51s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-199910 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-199910 logs nginx-svc -n default: exit status 1 (65.629557ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-199910 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-199910 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-199910 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-w8zxz" [952a9830-06ce-4f87-9055-b90e32fb9805] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 06:13:53.493013  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:14:34.454837  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:15:56.376679  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-199910 -n functional-199910
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 06:23:51.206885976 +0000 UTC m=+1129.820509708
functional_test.go:1460: (dbg) Run:  kubectl --context functional-199910 describe po hello-node-75c85bcc94-w8zxz -n default
functional_test.go:1460: (dbg) kubectl --context functional-199910 describe po hello-node-75c85bcc94-w8zxz -n default:
Name:             hello-node-75c85bcc94-w8zxz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-199910/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:13:50 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfndz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hfndz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-w8zxz to functional-199910
  Warning  Failed     9m43s                   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m56s (x4 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m56s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m48s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m48s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-199910 logs hello-node-75c85bcc94-w8zxz -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-199910 logs hello-node-75c85bcc94-w8zxz -n default: exit status 1 (59.956652ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-w8zxz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-199910 logs hello-node-75c85bcc94-w8zxz -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 06:17:34.711364  379278 retry.go:31] will retry after 1.896733001s: Temporary Error: Get "http:": http: no Host in request URL
I1002 06:17:36.608854  379278 retry.go:31] will retry after 6.741205186s: Temporary Error: Get "http:": http: no Host in request URL
I1002 06:17:43.350337  379278 retry.go:31] will retry after 8.052889361s: Temporary Error: Get "http:": http: no Host in request URL
I1002 06:17:51.404022  379278 retry.go:31] will retry after 14.963982333s: Temporary Error: Get "http:": http: no Host in request URL
I1002 06:18:06.368717  379278 retry.go:31] will retry after 10.867373022s: Temporary Error: Get "http:": http: no Host in request URL
E1002 06:18:12.517292  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1002 06:18:17.236893  379278 retry.go:31] will retry after 14.387140512s: Temporary Error: Get "http:": http: no Host in request URL
I1002 06:18:31.624547  379278 retry.go:31] will retry after 30.11708952s: Temporary Error: Get "http:": http: no Host in request URL
E1002 06:18:40.218614  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-199910 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.98.97.18   10.98.97.18   80:31415/TCP   5m27s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (87.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 service --namespace=default --https --url hello-node: exit status 115 (517.954146ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32043
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-199910 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 service hello-node --url --format={{.IP}}: exit status 115 (515.557019ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-199910 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 service hello-node --url: exit status 115 (515.536837ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32043
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-199910 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32043
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    

Test pass (297/331)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 14.37
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 11.29
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.2
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.36
21 TestBinaryMirror 0.77
22 TestOffline 54.22
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 163.23
29 TestAddons/serial/Volcano 40.19
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.45
35 TestAddons/parallel/Registry 15.22
36 TestAddons/parallel/RegistryCreds 0.88
37 TestAddons/parallel/Ingress 20.2
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.8
41 TestAddons/parallel/CSI 44.85
42 TestAddons/parallel/Headlamp 18.67
43 TestAddons/parallel/CloudSpanner 5.5
44 TestAddons/parallel/LocalPath 10.1
45 TestAddons/parallel/NvidiaDevicePlugin 5.49
46 TestAddons/parallel/Yakd 10.68
47 TestAddons/parallel/AmdGpuDevicePlugin 5.47
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 27.2
50 TestCertExpiration 213.39
52 TestForceSystemdFlag 27.23
53 TestForceSystemdEnv 27.51
54 TestDockerEnvContainerd 36.47
55 TestKVMDriverInstallOrUpdate 0.69
59 TestErrorSpam/setup 18.88
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.91
62 TestErrorSpam/pause 1.41
63 TestErrorSpam/unpause 1.48
64 TestErrorSpam/stop 11.95
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 38.28
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.01
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
76 TestFunctional/serial/CacheCmd/cache/add_local 1.81
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 43.49
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.17
87 TestFunctional/serial/LogsFileCmd 1.16
88 TestFunctional/serial/InvalidService 3.98
90 TestFunctional/parallel/ConfigCmd 0.34
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.9
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.59
103 TestFunctional/parallel/CpCmd 1.69
104 TestFunctional/parallel/MySQL 18.91
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.73
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
114 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.46
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.2
122 TestFunctional/parallel/ImageCommands/Setup 1.73
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.96
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
145 TestFunctional/parallel/ProfileCmd/profile_list 0.37
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
147 TestFunctional/parallel/MountCmd/any-port 7.37
148 TestFunctional/parallel/MountCmd/specific-port 1.82
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 116.68
163 TestMultiControlPlane/serial/DeployApp 5.06
164 TestMultiControlPlane/serial/PingHostFromPods 1.08
165 TestMultiControlPlane/serial/AddWorkerNode 24.71
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
168 TestMultiControlPlane/serial/CopyFile 16.3
169 TestMultiControlPlane/serial/StopSecondaryNode 12.55
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.04
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 93.16
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.06
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 35.71
177 TestMultiControlPlane/serial/RestartCluster 55.9
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 39.89
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
184 TestJSONOutput/start/Command 37.8
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.62
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.56
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.7
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
209 TestKicCustomNetwork/create_custom_network 33.12
210 TestKicCustomNetwork/use_default_bridge_network 23.45
211 TestKicExistingNetwork 23.42
212 TestKicCustomSubnet 23.14
213 TestKicStaticIP 26.45
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 46.09
218 TestMountStart/serial/StartWithMountFirst 5.92
219 TestMountStart/serial/VerifyMountFirst 0.25
220 TestMountStart/serial/StartWithMountSecond 5.66
221 TestMountStart/serial/VerifyMountSecond 0.25
222 TestMountStart/serial/DeleteFirst 1.63
223 TestMountStart/serial/VerifyMountPostDelete 0.25
224 TestMountStart/serial/Stop 1.19
225 TestMountStart/serial/RestartStopped 7.05
226 TestMountStart/serial/VerifyMountPostStop 0.25
229 TestMultiNode/serial/FreshStart2Nodes 66.11
230 TestMultiNode/serial/DeployApp2Nodes 4.27
231 TestMultiNode/serial/PingHostFrom2Pods 0.71
232 TestMultiNode/serial/AddNode 23.62
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.64
235 TestMultiNode/serial/CopyFile 9.27
236 TestMultiNode/serial/StopNode 2.19
237 TestMultiNode/serial/StartAfterStop 6.71
238 TestMultiNode/serial/RestartKeepsNodes 69.72
239 TestMultiNode/serial/DeleteNode 5.05
240 TestMultiNode/serial/StopMultiNode 23.79
241 TestMultiNode/serial/RestartMultiNode 50.03
242 TestMultiNode/serial/ValidateNameConflict 21.53
247 TestPreload 111.54
249 TestScheduledStopUnix 95.53
252 TestInsufficientStorage 11.95
253 TestRunningBinaryUpgrade 45.36
255 TestKubernetesUpgrade 314.79
256 TestMissingContainerUpgrade 116.3
257 TestStoppedBinaryUpgrade/Setup 2.63
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
267 TestNoKubernetes/serial/StartWithK8s 35.36
268 TestStoppedBinaryUpgrade/Upgrade 96.9
269 TestNoKubernetes/serial/StartWithStopK8s 26.34
270 TestNoKubernetes/serial/Start 4.73
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
272 TestNoKubernetes/serial/ProfileList 30.14
273 TestNoKubernetes/serial/Stop 1.21
274 TestNoKubernetes/serial/StartNoArgs 6.89
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
277 TestPause/serial/Start 40.75
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
279 TestPause/serial/SecondStartNoReconfiguration 6.16
287 TestNetworkPlugins/group/false 3.57
288 TestPause/serial/Pause 0.73
289 TestPause/serial/VerifyStatus 0.36
290 TestPause/serial/Unpause 0.72
294 TestPause/serial/PauseAgain 0.8
295 TestPause/serial/DeletePaused 2.73
296 TestPause/serial/VerifyDeletedResources 0.49
298 TestStartStop/group/old-k8s-version/serial/FirstStart 48.91
300 TestStartStop/group/embed-certs/serial/FirstStart 41.89
301 TestStartStop/group/embed-certs/serial/DeployApp 9.23
302 TestStartStop/group/old-k8s-version/serial/DeployApp 9.24
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.79
304 TestStartStop/group/embed-certs/serial/Stop 11.91
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
306 TestStartStop/group/old-k8s-version/serial/Stop 11.89
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/embed-certs/serial/SecondStart 51.06
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
310 TestStartStop/group/old-k8s-version/serial/SecondStart 44.23
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
314 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
316 TestStartStop/group/old-k8s-version/serial/Pause 2.71
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/embed-certs/serial/Pause 2.85
320 TestStartStop/group/no-preload/serial/FirstStart 50.33
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.53
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
324 TestStartStop/group/no-preload/serial/DeployApp 10.3
326 TestStartStop/group/newest-cni/serial/FirstStart 26.8
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
329 TestNetworkPlugins/group/auto/Start 42.72
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
331 TestStartStop/group/no-preload/serial/Stop 12.01
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.91
334 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
335 TestStartStop/group/no-preload/serial/SecondStart 55.4
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.8
338 TestStartStop/group/newest-cni/serial/Stop 2.04
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
340 TestStartStop/group/newest-cni/serial/SecondStart 11.91
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/newest-cni/serial/Pause 2.9
345 TestNetworkPlugins/group/auto/KubeletFlags 0.35
346 TestNetworkPlugins/group/auto/NetCatPod 9.24
347 TestNetworkPlugins/group/kindnet/Start 39.06
348 TestNetworkPlugins/group/auto/DNS 0.13
349 TestNetworkPlugins/group/auto/Localhost 0.11
350 TestNetworkPlugins/group/auto/HairPin 0.12
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.1
355 TestNetworkPlugins/group/calico/Start 43.87
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.37
357 TestNetworkPlugins/group/custom-flannel/Start 54.39
358 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
361 TestStartStop/group/no-preload/serial/Pause 2.92
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
363 TestNetworkPlugins/group/kindnet/NetCatPod 9.19
364 TestNetworkPlugins/group/enable-default-cni/Start 73.68
365 TestNetworkPlugins/group/kindnet/DNS 0.15
366 TestNetworkPlugins/group/kindnet/Localhost 0.14
367 TestNetworkPlugins/group/kindnet/HairPin 0.13
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.3
370 TestNetworkPlugins/group/flannel/Start 46.32
371 TestNetworkPlugins/group/calico/NetCatPod 9.19
372 TestNetworkPlugins/group/calico/DNS 0.14
373 TestNetworkPlugins/group/calico/Localhost 0.11
374 TestNetworkPlugins/group/calico/HairPin 0.11
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
377 TestNetworkPlugins/group/custom-flannel/DNS 0.15
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
380 TestNetworkPlugins/group/bridge/Start 35.26
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.2
383 TestNetworkPlugins/group/flannel/ControllerPod 6
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
385 TestNetworkPlugins/group/flannel/NetCatPod 9.17
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
389 TestNetworkPlugins/group/flannel/DNS 0.13
390 TestNetworkPlugins/group/flannel/Localhost 0.11
391 TestNetworkPlugins/group/flannel/HairPin 0.12
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
393 TestNetworkPlugins/group/bridge/NetCatPod 9.19
394 TestNetworkPlugins/group/bridge/DNS 0.12
395 TestNetworkPlugins/group/bridge/Localhost 0.11
396 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (14.37s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-788217 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-788217 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.373371265s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.37s)
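
For anyone reproducing this step outside CI: the test simply drives minikube in download-only mode with JSON event output and asserts on the emitted events. A minimal sketch, assuming a minikube binary on PATH (the harness uses out/minikube-linux-amd64 and its own generated profile names; "download-only-demo" below is hypothetical):

    # Fetch images and preloads for a given Kubernetes version without starting a cluster;
    # -o=json streams progress as JSON events.
    minikube start -o=json --download-only -p download-only-demo --force \
      --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker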

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:05:15.798229  379278 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1002 06:05:15.798368  379278 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-375701/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
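
The preload check above only confirms that the tarball fetched in the previous step landed in the local cache. A rough manual equivalent, with the path taken verbatim from the log (MINIKUBE_HOME is whatever the environment sets it to):

    # The preload tarball is cached under the minikube home directory:
    ls "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4"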

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-788217
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-788217: exit status 85 (59.98135ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-788217 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-788217 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:05:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:05:01.466965  379289 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:05:01.467215  379289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:01.467226  379289 out.go:374] Setting ErrFile to fd 2...
	I1002 06:05:01.467230  379289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:01.467419  379289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	W1002 06:05:01.467558  379289 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-375701/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-375701/.minikube/config/config.json: no such file or directory
	I1002 06:05:01.468065  379289 out.go:368] Setting JSON to true
	I1002 06:05:01.469015  379289 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6444,"bootTime":1759378657,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:05:01.469109  379289 start.go:140] virtualization: kvm guest
	I1002 06:05:01.471286  379289 out.go:99] [download-only-788217] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 06:05:01.471440  379289 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-375701/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:05:01.471477  379289 notify.go:220] Checking for updates...
	I1002 06:05:01.472708  379289 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:05:01.474110  379289 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:05:01.475349  379289 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:05:01.476423  379289 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:05:01.477569  379289 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 06:05:01.479857  379289 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:05:01.480113  379289 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:05:01.502901  379289 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:05:01.503030  379289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:01.555651  379289 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-02 06:05:01.545883133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:01.555764  379289 docker.go:318] overlay module found
	I1002 06:05:01.557344  379289 out.go:99] Using the docker driver based on user configuration
	I1002 06:05:01.557371  379289 start.go:304] selected driver: docker
	I1002 06:05:01.557377  379289 start.go:924] validating driver "docker" against <nil>
	I1002 06:05:01.557464  379289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:01.609570  379289 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-02 06:05:01.599801334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:01.609747  379289 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:05:01.610276  379289 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 06:05:01.610486  379289 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:05:01.612047  379289 out.go:171] Using Docker driver with root privileges
	I1002 06:05:01.613016  379289 cni.go:84] Creating CNI manager for ""
	I1002 06:05:01.613071  379289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:05:01.613082  379289 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:05:01.613154  379289 start.go:348] cluster config:
	{Name:download-only-788217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-788217 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:05:01.614437  379289 out.go:99] Starting "download-only-788217" primary control-plane node in "download-only-788217" cluster
	I1002 06:05:01.614455  379289 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:05:01.615443  379289 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:05:01.615466  379289 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 06:05:01.615572  379289 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:05:01.630966  379289 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:01.631181  379289 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:05:01.631275  379289 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:01.970197  379289 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1002 06:05:01.970228  379289 cache.go:58] Caching tarball of preloaded images
	I1002 06:05:01.970378  379289 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 06:05:01.972390  379289 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:05:01.972406  379289 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1002 06:05:02.074999  379289 preload.go:290] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1002 06:05:02.075163  379289 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21643-375701/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-788217 host does not exist
	  To start a cluster, run: "minikube start -p download-only-788217"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
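
Note that this subtest passes despite the exit status 85: with --download-only no control-plane host is ever created, so "minikube logs" has nothing to collect and fails fast, which appears to be exactly what LogsDuration times. A sketch of the same check, assuming the profile from the run above still existed:

    # Exit status 85 is expected here; the host was never created.
    minikube logs -p download-only-788217
    echo "exit status: $?"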

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-788217
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.29s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-662385 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-662385 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.287757525s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.29s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:05:27.470733  379278 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 06:05:27.470770  379278 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-375701/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-662385
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-662385: exit status 85 (57.58348ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-788217 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-788217 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ delete  │ -p download-only-788217                                                                                                                                                               │ download-only-788217 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │ 02 Oct 25 06:05 UTC │
	│ start   │ -o=json --download-only -p download-only-662385 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-662385 │ jenkins │ v1.37.0 │ 02 Oct 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:05:16
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:05:16.225585  379670 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:05:16.225836  379670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:16.225844  379670 out.go:374] Setting ErrFile to fd 2...
	I1002 06:05:16.225848  379670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:05:16.226038  379670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:05:16.226487  379670 out.go:368] Setting JSON to true
	I1002 06:05:16.227361  379670 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6459,"bootTime":1759378657,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:05:16.227451  379670 start.go:140] virtualization: kvm guest
	I1002 06:05:16.229177  379670 out.go:99] [download-only-662385] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:05:16.229339  379670 notify.go:220] Checking for updates...
	I1002 06:05:16.230430  379670 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:05:16.231669  379670 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:05:16.232892  379670 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:05:16.234071  379670 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:05:16.235228  379670 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 06:05:16.237223  379670 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:05:16.237465  379670 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:05:16.260617  379670 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:05:16.260725  379670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:16.313349  379670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 06:05:16.302932002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:16.313451  379670 docker.go:318] overlay module found
	I1002 06:05:16.314961  379670 out.go:99] Using the docker driver based on user configuration
	I1002 06:05:16.314998  379670 start.go:304] selected driver: docker
	I1002 06:05:16.315007  379670 start.go:924] validating driver "docker" against <nil>
	I1002 06:05:16.315089  379670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:05:16.370625  379670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 06:05:16.361043579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:05:16.370774  379670 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:05:16.371267  379670 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 06:05:16.371404  379670 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:05:16.372999  379670 out.go:171] Using Docker driver with root privileges
	I1002 06:05:16.374153  379670 cni.go:84] Creating CNI manager for ""
	I1002 06:05:16.374217  379670 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:05:16.374228  379670 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:05:16.374311  379670 start.go:348] cluster config:
	{Name:download-only-662385 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-662385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:05:16.375480  379670 out.go:99] Starting "download-only-662385" primary control-plane node in "download-only-662385" cluster
	I1002 06:05:16.375503  379670 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:05:16.376468  379670 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:05:16.376490  379670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:05:16.376629  379670 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:05:16.392541  379670 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:05:16.392658  379670 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:05:16.392675  379670 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:05:16.392679  379670 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:05:16.392686  379670 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:05:16.724634  379670 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1002 06:05:16.724673  379670 cache.go:58] Caching tarball of preloaded images
	I1002 06:05:16.724833  379670 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:05:16.726531  379670 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 06:05:16.726550  379670 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1002 06:05:16.824743  379670 preload.go:290] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1002 06:05:16.824794  379670 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21643-375701/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-662385 host does not exist
	  To start a cluster, run: "minikube start -p download-only-662385"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-662385
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (0.36s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-176670 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-176670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-176670
--- PASS: TestDownloadOnlyKic (0.36s)
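
This test covers download-only mode under the docker (KIC) driver, i.e. pre-fetching the kicbase image and related caches. By hand, under the same assumptions as earlier (hypothetical profile name):

    # Pull only what the docker driver needs, then clean up the throwaway profile.
    minikube start --download-only -p download-docker-demo --alsologtostderr \
      --driver=docker --container-runtime=containerd
    minikube delete -p download-docker-demo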

                                                
                                    
TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
I1002 06:05:28.462548  379278 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-013897 --alsologtostderr --binary-mirror http://127.0.0.1:33585 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-013897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-013897
--- PASS: TestBinaryMirror (0.77s)
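
--binary-mirror redirects kubectl/kubelet/kubeadm downloads to an alternative URL; here the harness appears to serve a mirror on an ephemeral local port (33585 in this run) and checks that minikube fetches through it. A minimal sketch, assuming a mirror is listening locally and using a hypothetical profile name:

    # Fetch binaries through a local mirror instead of dl.k8s.io, without starting a cluster.
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:33585 --driver=docker --container-runtime=containerd
    minikube delete -p binary-mirror-demo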

                                                
                                    
TestOffline (54.22s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-566176 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-566176 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (51.752142966s)
helpers_test.go:175: Cleaning up "offline-containerd-566176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-566176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-566176: (2.470092105s)
--- PASS: TestOffline (54.22s)
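
The offline test verifies that a cluster can come up from cached artifacts alone. The equivalent invocation, with a hypothetical profile name:

    # --wait=true blocks until core components report healthy.
    minikube start -p offline-demo --alsologtostderr -v=1 --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube delete -p offline-demo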

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-304257
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-304257: exit status 85 (50.337991ms)

                                                
                                                
-- stdout --
	* Profile "addons-304257" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-304257"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
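
Exit status 85 is the "no such profile" path: enabling an addon against a profile that was never started fails fast and points the user at "minikube start". Sketch, with a deliberately nonexistent profile name:

    # Fails with exit status 85 and a "Profile ... not found" message:
    minikube addons enable dashboard -p does-not-exist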

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-304257
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-304257: exit status 85 (51.542162ms)

                                                
                                                
-- stdout --
	* Profile "addons-304257" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-304257"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (163.23s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-304257 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-304257 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.227711423s)
--- PASS: TestAddons/Setup (163.23s)
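
Every addon subtest below runs against this one cluster. The setup is just "minikube start" with one --addons flag per addon; a trimmed sketch with a hypothetical profile (the full run above enables many more addons):

    # Enable several addons at start time; each addon gets its own --addons flag.
    minikube start -p addons-demo --wait=true --memory=4096 \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns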

                                                
                                    
TestAddons/serial/Volcano (40.19s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 14.915834ms
addons_test.go:876: volcano-admission stabilized in 15.295975ms
addons_test.go:884: volcano-controller stabilized in 15.795055ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-fjrnk" [b5079a4a-e9fe-437c-8a1f-35957b96e223] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002713439s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-s9576" [d4d1b256-8642-4026-9ccb-2ebfd41c7619] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003590999s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-rxkr2" [e95579d5-65f8-4472-a9b6-4b7976d139f5] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003394181s
addons_test.go:903: (dbg) Run:  kubectl --context addons-304257 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-304257 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-304257 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [4708ec93-1c93-48a7-b01b-360e8c25d208] Pending
helpers_test.go:352: "test-job-nginx-0" [4708ec93-1c93-48a7-b01b-360e8c25d208] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [4708ec93-1c93-48a7-b01b-360e8c25d208] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003396862s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable volcano --alsologtostderr -v=1: (11.785574009s)
--- PASS: TestAddons/serial/Volcano (40.19s)
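
The Volcano flow above is: wait for the three volcano-system components to stabilize, submit a VolcanoJob from testdata, and wait for its pod to run. The kubectl side, verbatim from the run (vcjob is the resource name the test queries):

    # Submit a VolcanoJob and inspect it in its namespace.
    kubectl --context addons-304257 create -f testdata/vcjob.yaml
    kubectl --context addons-304257 get vcjob -n my-volcano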

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-304257 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-304257 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-304257 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-304257 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fba6d47e-c7c1-4f1e-8235-704a261bb5cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fba6d47e-c7c1-4f1e-8235-704a261bb5cf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003164702s
addons_test.go:694: (dbg) Run:  kubectl --context addons-304257 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-304257 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-304257 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.45s)
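
The fake-credentials check confirms that the gcp-auth webhook injected credentials into a plain busybox pod; the probe is just printenv inside the pod, as run above:

    # gcp-auth should have set both variables in the admitted pod.
    kubectl --context addons-304257 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-304257 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"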

                                                
                                    
TestAddons/parallel/Registry (15.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.055775ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-g2t8p" [e9c062ca-7ea0-4229-9f16-4aec3fe03d37] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002958876s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vjpvw" [19e8ba42-02f1-4cb5-938d-ac34cb02e140] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003212416s
addons_test.go:392: (dbg) Run:  kubectl --context addons-304257 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-304257 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-304257 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.473964568s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 ip
2025/10/02 06:09:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.22s)
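
The registry addon is probed from inside the cluster with a throwaway busybox pod; reachability of the in-cluster service DNS name is the core assertion, verbatim from the run:

    # HEAD-check the registry service from inside the cluster.
    kubectl --context addons-304257 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"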

                                                
                                    
TestAddons/parallel/RegistryCreds (0.88s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.222413ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-304257
addons_test.go:332: (dbg) Run:  kubectl --context addons-304257 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.88s)

TestAddons/parallel/Ingress (20.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-304257 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-304257 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-304257 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [df0fbca9-4191-41f2-896a-b758fec16a45] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [df0fbca9-4191-41f2-896a-b758fec16a45] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003432668s
I1002 06:09:21.581653  379278 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-304257 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable ingress-dns --alsologtostderr -v=1: (1.265722344s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable ingress --alsologtostderr -v=1: (7.742466149s)
--- PASS: TestAddons/parallel/Ingress (20.20s)
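The ingress check above reduces to a short hand-runnable sequence: publish an ingress rule, curl it from inside the node with the matching Host header, then resolve an ingress host through ingress-dns. A minimal sketch under the same assumptions as this run (profile addons-304257, node IP 192.168.49.2, manifests from the minikube repo's testdata/):

  kubectl --context addons-304257 replace --force -f testdata/nginx-ingress-v1.yaml
  kubectl --context addons-304257 replace --force -f testdata/nginx-pod-svc.yaml
  # The Host header selects the ingress rule; curl runs on the node itself.
  minikube -p addons-304257 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns serves DNS for ingress hosts at the node IP.
  kubectl --context addons-304257 replace --force -f testdata/ingress-dns-example-v1.yaml
  nslookup hello-john.test "$(minikube -p addons-304257 ip)"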

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gssrl" [0f291b0f-f7fb-4662-bd4c-b150eaef0235] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00345907s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.349725ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-f5dcd" [e55a25e9-4678-49f8-a450-e4c2d12e365c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002974379s
addons_test.go:463: (dbg) Run:  kubectl --context addons-304257 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/CSI (44.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 06:09:31.341404  379278 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 06:09:31.344844  379278 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 06:09:31.344868  379278 kapi.go:107] duration metric: took 3.472143ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.481458ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [3918f5bc-8215-46cb-92c2-440a22c505a2] Pending
helpers_test.go:352: "task-pv-pod" [3918f5bc-8215-46cb-92c2-440a22c505a2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [3918f5bc-8215-46cb-92c2-440a22c505a2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003420406s
addons_test.go:572: (dbg) Run:  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-304257 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-304257 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-304257 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-304257 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [02dd1300-67fe-4dfc-b59b-2240f42d0b51] Pending
helpers_test.go:352: "task-pv-pod-restore" [02dd1300-67fe-4dfc-b59b-2240f42d0b51] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [02dd1300-67fe-4dfc-b59b-2240f42d0b51] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003283609s
addons_test.go:614: (dbg) Run:  kubectl --context addons-304257 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-304257 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-304257 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.513995105s)
--- PASS: TestAddons/parallel/CSI (44.85s)
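The CSI pass above is a provision, snapshot, restore round-trip. Sketched by hand, assuming the same context and the testdata/csi-hostpath-driver manifests from the minikube repo:

  # Provision a claim and a pod that mounts it.
  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  # Snapshot the volume, then drop the original pod and claim.
  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-304257 delete pod task-pv-pod
  kubectl --context addons-304257 delete pvc hpvc
  # Restore into a new claim sourced from the snapshot, and mount it again.
  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-304257 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml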

TestAddons/parallel/Headlamp (18.67s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-304257 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-ngk9r" [e36b4cce-8b9a-44e0-99d5-526b66997bae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-ngk9r" [e36b4cce-8b9a-44e0-99d5-526b66997bae] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-ngk9r" [e36b4cce-8b9a-44e0-99d5-526b66997bae] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003902225s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable headlamp --alsologtostderr -v=1: (5.876585367s)
--- PASS: TestAddons/parallel/Headlamp (18.67s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-brf7b" [71b42573-986d-4824-a44c-70e439899902] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003485598s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (10.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-304257 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-304257 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6096bab4-ac1d-448c-9802-09a2c2a69475] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6096bab4-ac1d-448c-9802-09a2c2a69475] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6096bab4-ac1d-448c-9802-09a2c2a69475] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00364047s
addons_test.go:967: (dbg) Run:  kubectl --context addons-304257 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 ssh "cat /opt/local-path-provisioner/pvc-e6f6f7c2-295c-45ee-8fbf-f361cd230cb6_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-304257 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-304257 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.10s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-84vmq" [5d882556-da31-4779-a746-c7074d430d5a] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003059068s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (10.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cjbz6" [70f9d600-b472-405b-87ed-be948a4d1154] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003514651s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-304257 addons disable yakd --alsologtostderr -v=1: (5.678879036s)
--- PASS: TestAddons/parallel/Yakd (10.68s)

TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-p4f98" [6d69dcc9-8c14-4465-a2ed-e9a4e25d10f9] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003465133s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-304257 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.47s)

TestAddons/StoppedEnableDisable (12.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-304257
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-304257: (12.217724779s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-304257
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-304257
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-304257
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

TestCertOptions (27.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-436832 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-436832 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (24.167560397s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-436832 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-436832 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-436832 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-436832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-436832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-436832: (2.369772099s)
--- PASS: TestCertOptions (27.20s)

TestCertExpiration (213.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-108846 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-108846 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.567426715s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-108846 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-108846 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.441568833s)
helpers_test.go:175: Cleaning up "cert-expiration-108846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-108846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-108846: (2.381915599s)
--- PASS: TestCertExpiration (213.39s)
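The 213s wall time is mostly deliberate waiting: the cluster starts with 3-minute certificates, the test sits out that window, and a second start with a longer --cert-expiration regenerates the now-expired certs. A rough hand-run equivalent (the sleep is an assumption about the wait; the commands are the ones logged above):

  minikube start -p cert-expiration-108846 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180  # assumed wait for the 3m certificates to lapse
  minikube start -p cert-expiration-108846 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd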

TestForceSystemdFlag (27.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-445130 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-445130 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.595716112s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-445130 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-445130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-445130
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-445130: (2.342717872s)
--- PASS: TestForceSystemdFlag (27.23s)

TestForceSystemdEnv (27.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-727641 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1002 06:46:15.582366  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-727641 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.222265836s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-727641 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-727641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-727641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-727641: (1.996988764s)
--- PASS: TestForceSystemdEnv (27.51s)

TestDockerEnvContainerd (36.47s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-407860 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-407860 --driver=docker  --container-runtime=containerd: (20.624877167s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-407860"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX1toyz6/agent.404990" SSH_AGENT_PID="404991" DOCKER_HOST=ssh://docker@127.0.0.1:33149 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX1toyz6/agent.404990" SSH_AGENT_PID="404991" DOCKER_HOST=ssh://docker@127.0.0.1:33149 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX1toyz6/agent.404990" SSH_AGENT_PID="404991" DOCKER_HOST=ssh://docker@127.0.0.1:33149 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.714402034s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXX1toyz6/agent.404990" SSH_AGENT_PID="404991" DOCKER_HOST=ssh://docker@127.0.0.1:33149 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-407860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-407860
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-407860: (2.276667261s)
--- PASS: TestDockerEnvContainerd (36.47s)
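The docker-env flow above can be replayed by hand; the only step not shown verbatim in the log is applying the printed exports with eval, which is the usual shell idiom for docker-env:

  minikube start -p dockerenv-407860 --driver=docker --container-runtime=containerd
  # --ssh-host/--ssh-add yield an ssh-agent-backed DOCKER_HOST rather than a TCP endpoint.
  eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-407860)"
  docker version
  # Classic builder (BuildKit disabled), exactly as the test invokes it.
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls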

TestKVMDriverInstallOrUpdate (0.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1002 06:46:41.712614  379278 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 06:46:41.712753  379278 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3773770481/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 06:46:41.742531  379278 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3773770481/001/docker-machine-driver-kvm2 version is 1.1.1
W1002 06:46:41.742572  379278 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1002 06:46:41.742678  379278 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 06:46:41.742722  379278 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3773770481/001/docker-machine-driver-kvm2
I1002 06:46:42.261565  379278 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3773770481/001:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 06:46:42.276663  379278 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3773770481/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.69s)

TestErrorSpam/setup (18.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-389610 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-389610 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-389610 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-389610 --driver=docker  --container-runtime=containerd: (18.883250196s)
--- PASS: TestErrorSpam/setup (18.88s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (11.95s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 stop: (11.76627676s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-389610 --log_dir /tmp/nospam-389610 stop
--- PASS: TestErrorSpam/stop (11.95s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-375701/.minikube/files/etc/test/nested/copy/379278/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-199910 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.284187072s)
--- PASS: TestFunctional/serial/StartWithProxy (38.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
I1002 06:12:28.873391  379278 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-199910 --alsologtostderr -v=8: (6.005062038s)
functional_test.go:678: soft start took 6.005737699s for "functional-199910" cluster.
I1002 06:12:34.878777  379278 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-199910 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-199910 /tmp/TestFunctionalserialCacheCmdcacheadd_local2086298250/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache add minikube-local-cache-test:functional-199910
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 cache add minikube-local-cache-test:functional-199910: (1.507710166s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache delete minikube-local-cache-test:functional-199910
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-199910
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (265.682384ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
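What the reload exercise verifies, step by step: remove the image from the node, confirm inspecti now fails, repopulate the node from minikube's on-disk cache, confirm inspecti succeeds again. By hand, with the profile from this run (the `|| true` is an assumption to keep the expected failure from aborting a scripted run):

  minikube -p functional-199910 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-199910 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true  # expected: not found
  minikube -p functional-199910 cache reload
  minikube -p functional-199910 ssh sudo crictl inspecti registry.k8s.io/pause:latest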

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 kubectl -- --context functional-199910 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-199910 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (43.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 06:13:12.517502  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.523831  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.535178  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.556491  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.597847  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.679249  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:12.840810  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:13.162991  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:13.804381  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:15.085968  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:17.648085  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:13:22.769383  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-199910 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.490899501s)
functional_test.go:776: restart took 43.491008221s for "functional-199910" cluster.
I1002 06:13:24.905644  379278 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (43.49s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-199910 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs: (1.169296029s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 logs --file /tmp/TestFunctionalserialLogsFileCmd2213486653/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 logs --file /tmp/TestFunctionalserialLogsFileCmd2213486653/001/logs.txt: (1.160440347s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

TestFunctional/serial/InvalidService (3.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-199910 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-199910
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-199910: exit status 115 (331.033626ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31801 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr **
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-199910 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.98s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 config get cpus: exit status 14 (57.090633ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 config get cpus: exit status 14 (60.445204ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
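Note: `config get` on an unset key exits with status 14 ("specified key could not be found in config"), which the test asserts twice above, while `config unset` succeeds even when the key is already absent. The full round trip as a standalone sketch:
    out/minikube-linux-amd64 -p functional-199910 config unset cpus   # ok even if already unset
    out/minikube-linux-amd64 -p functional-199910 config get cpus     # exit status 14: key not found
    out/minikube-linux-amd64 -p functional-199910 config set cpus 2
    out/minikube-linux-amd64 -p functional-199910 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-199910 config unset cpus
    out/minikube-linux-amd64 -p functional-199910 config get cpus     # exit status 14 again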

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (145.560725ms)

                                                
                                                
-- stdout --
	* [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:19:15.107426  429751 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:19:15.107661  429751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.107669  429751 out.go:374] Setting ErrFile to fd 2...
	I1002 06:19:15.107673  429751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:15.107872  429751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:19:15.108336  429751 out.go:368] Setting JSON to false
	I1002 06:19:15.109337  429751 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:19:15.109422  429751 start.go:140] virtualization: kvm guest
	I1002 06:19:15.111048  429751 out.go:179] * [functional-199910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:19:15.112185  429751 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:19:15.112180  429751 notify.go:220] Checking for updates...
	I1002 06:19:15.114069  429751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:19:15.115302  429751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:19:15.116414  429751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:19:15.120104  429751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:19:15.121081  429751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:19:15.122433  429751 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:19:15.122882  429751 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:19:15.145335  429751 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:19:15.145405  429751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.197621  429751 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.188105913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.197728  429751 docker.go:318] overlay module found
	I1002 06:19:15.199288  429751 out.go:179] * Using the docker driver based on existing profile
	I1002 06:19:15.200764  429751 start.go:304] selected driver: docker
	I1002 06:19:15.200788  429751 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.200878  429751 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:19:15.202512  429751 out.go:203] 
	W1002 06:19:15.203652  429751 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 06:19:15.204652  429751 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)
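Note: exit status 23 is the expected failure path here: --dry-run still validates the requested resources, and 250MB is below minikube's 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY). Both dry-run probes the test performs, as a sketch:
    # invalid: requested memory below the minimum, exits 23
    out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    # valid: the same dry run without the memory override succeeds
    out/minikube-linux-amd64 start -p functional-199910 --dry-run --driver=docker --container-runtime=containerd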

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (147.586543ms)

                                                
                                                
-- stdout --
	* [functional-199910] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 06:19:14.961663  429669 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:19:14.961903  429669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:14.961913  429669 out.go:374] Setting ErrFile to fd 2...
	I1002 06:19:14.961932  429669 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:19:14.962258  429669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:19:14.962650  429669 out.go:368] Setting JSON to false
	I1002 06:19:14.963649  429669 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7298,"bootTime":1759378657,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:19:14.963738  429669 start.go:140] virtualization: kvm guest
	I1002 06:19:14.965465  429669 out.go:179] * [functional-199910] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 06:19:14.966896  429669 notify.go:220] Checking for updates...
	I1002 06:19:14.966947  429669 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:19:14.968156  429669 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:19:14.969532  429669 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:19:14.970690  429669 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:19:14.971729  429669 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:19:14.972764  429669 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:19:14.974283  429669 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:19:14.974752  429669 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:19:14.999465  429669 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:19:14.999612  429669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:19:15.051670  429669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 06:19:15.041868631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:19:15.051777  429669 docker.go:318] overlay module found
	I1002 06:19:15.053489  429669 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 06:19:15.054618  429669 start.go:304] selected driver: docker
	I1002 06:19:15.054632  429669 start.go:924] validating driver "docker" against &{Name:functional-199910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-199910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:19:15.054724  429669 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:19:15.056438  429669 out.go:203] 
	W1002 06:19:15.057453  429669 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 06:19:15.058528  429669 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
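Note: the French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo") is the localized form of the English message seen in the DryRun test: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A sketch of how the translation is presumably selected, assuming minikube reads the standard locale environment variables (the harness sets these outside the logged command line):
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-199910 --dry-run --memory 250MB --driver=docker --container-runtime=containerd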

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
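Note: the -f flag takes a Go template over minikube's status struct; in the format string above, "kublet:" is just a literal label (typo and all, as logged), while {{.Kubelet}} is the field that actually matters. Equivalent standalone probes:
    out/minikube-linux-amd64 -p functional-199910 status
    out/minikube-linux-amd64 -p functional-199910 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-linux-amd64 -p functional-199910 status -o json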

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh -n functional-199910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cp functional-199910:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1186682746/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh -n functional-199910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh -n functional-199910 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)
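Note: the three cp invocations above cover host-to-node, node-to-host, and copy into a directory that does not yet exist on the node (/tmp/does/not/exist is created by the copy). As a standalone sketch, with the local destination path being an illustrative choice:
    out/minikube-linux-amd64 -p functional-199910 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    out/minikube-linux-amd64 -p functional-199910 cp functional-199910:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
    out/minikube-linux-amd64 -p functional-199910 ssh -n functional-199910 "sudo cat /home/docker/cp-test.txt"  # verify on the node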

                                                
                                    
TestFunctional/parallel/MySQL (18.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-199910 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-cvvj2" [ecd69e64-e562-48b7-934f-9557e3c143ee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-cvvj2" [ecd69e64-e562-48b7-934f-9557e3c143ee] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003251486s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199910 exec mysql-5bb876957f-cvvj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199910 exec mysql-5bb876957f-cvvj2 -- mysql -ppassword -e "show databases;": exit status 1 (109.982085ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1002 06:13:47.156949  379278 retry.go:31] will retry after 1.418471287s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199910 exec mysql-5bb876957f-cvvj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-199910 exec mysql-5bb876957f-cvvj2 -- mysql -ppassword -e "show databases;": exit status 1 (100.012592ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1002 06:13:48.676545  379278 retry.go:31] will retry after 1.996748885s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-199910 exec mysql-5bb876957f-cvvj2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.91s)
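Note: the two non-zero exits above are expected startup noise rather than failures: ERROR 1045 can occur while the image's entrypoint is still initializing credentials, and ERROR 2002 means mysqld's socket is not up yet; the harness retries with backoff until the query succeeds. A hand-rolled equivalent wait loop (the deployment name mysql is inferred from the pod name mysql-5bb876957f-cvvj2):
    until kubectl --context functional-199910 exec deploy/mysql -- \
        mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
      sleep 2   # retry until mysqld accepts the root password
    done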

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/379278/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/test/nested/copy/379278/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
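Note: the path /etc/test/nested/copy/379278/hosts (379278 is the test process PID) is checked inside the node, assuming the harness staged the file under $MINIKUBE_HOME/.minikube/files on the host, which minikube mirrors into the node's filesystem at the corresponding path on start. The probe itself:
    out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/test/nested/copy/379278/hosts"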

                                                
                                    
TestFunctional/parallel/CertSync (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/379278.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/ssl/certs/379278.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/379278.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /usr/share/ca-certificates/379278.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3792782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/ssl/certs/3792782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3792782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /usr/share/ca-certificates/3792782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)
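Note: each synced cert is checked under three paths: the literal .pem in /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named file (51391683.0, 3ec20f2e.0). Assuming those names follow OpenSSL's subject-hash convention (as c_rehash would produce), they can be derived from the .pem directly:
    openssl x509 -noout -hash -in /usr/share/ca-certificates/379278.pem    # expected to print 51391683
    openssl x509 -noout -hash -in /usr/share/ca-certificates/3792782.pem   # expected to print 3ec20f2e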

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-199910 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
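Note: the go-template above iterates the first node's metadata.labels and prints the keys. For interactive use, the same information is available without a template:
    kubectl --context functional-199910 get nodes --show-labels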

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active docker": exit status 1 (284.948933ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active crio": exit status 1 (296.149579ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
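Note: the "exit status 1" at the minikube level wraps "ssh: Process exited with status 3" from the node, which matches systemctl's convention: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active. With containerd as the active runtime, both other runtimes report inactive:
    out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active docker"   # prints "inactive", exits 3
    out/minikube-linux-amd64 -p functional-199910 ssh "sudo systemctl is-active crio"     # prints "inactive", exits 3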

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199910 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-199910
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-199910
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199910 image ls --format short --alsologtostderr:
I1002 06:19:43.604602  431643 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:43.604851  431643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:43.604861  431643 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:43.604867  431643 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:43.605081  431643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:43.605679  431643 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:43.605790  431643 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:43.606148  431643 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:43.624347  431643 ssh_runner.go:195] Run: systemctl --version
I1002 06:19:43.624392  431643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:43.640609  431643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:43.738825  431643 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
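Note: as the stderr trace shows, `image ls` is implemented by SSHing into the node and shelling out to the CRI client; the short format then prints one repo:tag per line. Equivalent direct probes:
    out/minikube-linux-amd64 -p functional-199910 ssh "sudo crictl images --output json"
    out/minikube-linux-amd64 -p functional-199910 image ls --format short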

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199910 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/minikube-local-cache-test │ functional-199910  │ sha256:cc10d7 │ 991B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-199910  │ sha256:9056ab │ 2.37MB │
│ localhost/my-image                          │ functional-199910  │ sha256:60fd8f │ 775kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199910 image ls --format table --alsologtostderr:
I1002 06:19:47.435302  432155 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:47.435698  432155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:47.435707  432155 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:47.435711  432155 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:47.435882  432155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:47.436461  432155 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:47.436547  432155 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:47.436872  432155 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:47.454145  432155 ssh_runner.go:195] Run: systemctl --version
I1002 06:19:47.454184  432155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:47.470532  432155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:47.570037  432155 ssh_runner.go:195] Run: sudo crictl images --output json
E1002 06:23:12.516977  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199910 image ls --format json --alsologtostderr:
[{"id":"sha256:60fd8f09e6055308ae6ff0de97111306fa3d0617740c074755101bc6be20338c","repoDigests":[],"repoTags":["localhost/my-image:functional-199910"],"size":"774889"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:409467f978b
4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-199910"],"size":"2372971"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae5
6536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["regist
ry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:cc10d72a40f7ef09b8995231c4a8571eebfeb36c64031d0dbc0fa0c8e894cdb9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-199910"],"size":"991"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199910 image ls --format json --alsologtostderr:
I1002 06:19:47.227601  432101 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:47.228086  432101 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:47.228096  432101 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:47.228101  432101 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:47.228384  432101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:47.229022  432101 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:47.229135  432101 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:47.229470  432101 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:47.247276  432101 ssh_runner.go:195] Run: systemctl --version
I1002 06:19:47.247338  432101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:47.264601  432101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:47.363123  432101 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-199910 image ls --format yaml --alsologtostderr:
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-199910
size: "2372971"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:cc10d72a40f7ef09b8995231c4a8571eebfeb36c64031d0dbc0fa0c8e894cdb9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-199910
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199910 image ls --format yaml --alsologtostderr:
I1002 06:19:43.812696  431712 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:43.813042  431712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:43.813056  431712 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:43.813063  431712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:43.813269  431712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:43.813881  431712 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:43.814007  431712 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:43.814355  431712 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:43.833033  431712 ssh_runner.go:195] Run: systemctl --version
I1002 06:19:43.833076  431712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:43.849828  431712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:43.948975  431712 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh pgrep buildkitd: exit status 1 (248.877571ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr: (2.746744827s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-199910 image build -t localhost/my-image:functional-199910 testdata/build --alsologtostderr:
I1002 06:19:44.273094  431871 out.go:360] Setting OutFile to fd 1 ...
I1002 06:19:44.273368  431871 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:44.273378  431871 out.go:374] Setting ErrFile to fd 2...
I1002 06:19:44.273382  431871 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 06:19:44.273599  431871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
I1002 06:19:44.274222  431871 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:44.274850  431871 config.go:182] Loaded profile config "functional-199910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 06:19:44.275312  431871 cli_runner.go:164] Run: docker container inspect functional-199910 --format={{.State.Status}}
I1002 06:19:44.292548  431871 ssh_runner.go:195] Run: systemctl --version
I1002 06:19:44.292595  431871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-199910
I1002 06:19:44.309267  431871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/functional-199910/id_rsa Username:docker}
I1002 06:19:44.408088  431871 build_images.go:161] Building image from path: /tmp/build.2056525975.tar
I1002 06:19:44.408150  431871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 06:19:44.415877  431871 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2056525975.tar
I1002 06:19:44.419473  431871 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2056525975.tar: stat -c "%s %y" /var/lib/minikube/build/build.2056525975.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2056525975.tar': No such file or directory
I1002 06:19:44.419503  431871 ssh_runner.go:362] scp /tmp/build.2056525975.tar --> /var/lib/minikube/build/build.2056525975.tar (3072 bytes)
I1002 06:19:44.436539  431871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2056525975
I1002 06:19:44.443596  431871 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2056525975 -xf /var/lib/minikube/build/build.2056525975.tar
I1002 06:19:44.450759  431871 containerd.go:394] Building image: /var/lib/minikube/build/build.2056525975
I1002 06:19:44.450805  431871 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2056525975 --local dockerfile=/var/lib/minikube/build/build.2056525975 --output type=image,name=localhost/my-image:functional-199910
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:f1b0a98da58f8db540d5ec24da5231a25886c78b87c411a5e7782b78256263ef done
#8 exporting config sha256:60fd8f09e6055308ae6ff0de97111306fa3d0617740c074755101bc6be20338c done
#8 naming to localhost/my-image:functional-199910 done
#8 DONE 0.1s
I1002 06:19:46.953769  431871 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2056525975 --local dockerfile=/var/lib/minikube/build/build.2056525975 --output type=image,name=localhost/my-image:functional-199910: (2.502935086s)
I1002 06:19:46.953825  431871 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2056525975
I1002 06:19:46.962227  431871 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2056525975.tar
I1002 06:19:46.969554  431871 build_images.go:217] Built localhost/my-image:functional-199910 from /tmp/build.2056525975.tar
I1002 06:19:46.969592  431871 build_images.go:133] succeeded building to: functional-199910
I1002 06:19:46.969597  431871 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.20s)
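The BuildKit steps above imply a three-instruction Dockerfile (#5 FROM busybox, #6 RUN true, #7 ADD content.txt). A hypothetical reconstruction of the testdata/build fixture, for replaying the build by hand; the real fixture contents may differ:

  # sketch only: file contents below are guesses based on the build steps above
  mkdir -p /tmp/build-fixture && cd /tmp/build-fixture
  echo "test content" > content.txt          # actual contents not shown in the log
  cat > Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-amd64 -p functional-199910 image build -t localhost/my-image:functional-199910 .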

TestFunctional/parallel/ImageCommands/Setup (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.70508368s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-199910
E1002 06:13:33.010977  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image load --daemon kicbase/echo-server:functional-199910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 422406: os: process already finished
helpers_test.go:519: unable to terminate pid 422098: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image load --daemon kicbase/echo-server:functional-199910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-199910
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image load --daemon kicbase/echo-server:functional-199910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.96s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image save kicbase/echo-server:functional-199910 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image rm kicbase/echo-server:functional-199910 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-199910
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 image save --daemon kicbase/echo-server:functional-199910 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-199910
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
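Read together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon blocks above amount to a full save-and-restore round trip. Condensed into a sketch (same commands as in the log, with the tar path shortened for readability):

  IMG=kicbase/echo-server:functional-199910
  out/minikube-linux-amd64 -p functional-199910 image save "$IMG" /tmp/echo-server-save.tar  # node -> tar
  out/minikube-linux-amd64 -p functional-199910 image rm "$IMG"                              # drop from node
  out/minikube-linux-amd64 -p functional-199910 image load /tmp/echo-server-save.tar         # tar -> node
  out/minikube-linux-amd64 -p functional-199910 image save --daemon "$IMG"                   # node -> host docker
  docker image inspect "$IMG"                                                                # verify on the host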

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-199910 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "318.108247ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "48.316728ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "322.840031ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "49.073802ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (7.37s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdany-port999403763/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759385943040280221" to /tmp/TestFunctionalparallelMountCmdany-port999403763/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759385943040280221" to /tmp/TestFunctionalparallelMountCmdany-port999403763/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759385943040280221" to /tmp/TestFunctionalparallelMountCmdany-port999403763/001/test-1759385943040280221
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.848336ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1002 06:19:03.301376  379278 retry.go:31] will retry after 277.843998ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 06:19 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 06:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 06:19 test-1759385943040280221
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh cat /mount-9p/test-1759385943040280221
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-199910 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0991bb34-4e7d-4e9d-8c6c-341d2b6694f8] Pending
helpers_test.go:352: "busybox-mount" [0991bb34-4e7d-4e9d-8c6c-341d2b6694f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0991bb34-4e7d-4e9d-8c6c-341d2b6694f8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0991bb34-4e7d-4e9d-8c6c-341d2b6694f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003647919s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-199910 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdany-port999403763/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.37s)
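The any-port flow above is: start a 9p mount in the background, poll findmnt until the mount appears inside the node, then exercise it from both the host and a pod. A minimal sketch of the same verification loop (the /tmp source directory is a placeholder):

  out/minikube-linux-amd64 mount -p functional-199910 /tmp/mount-src:/mount-9p &
  until out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p"; do
    sleep 1   # the log shows the test retrying with backoff rather than a fixed sleep
  done
  out/minikube-linux-amd64 -p functional-199910 ssh -- ls -la /mount-9p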

TestFunctional/parallel/MountCmd/specific-port (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdspecific-port2307594671/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.501091ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1002 06:19:10.677342  379278 retry.go:31] will retry after 582.375069ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdspecific-port2307594671/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "sudo umount -f /mount-9p": exit status 1 (252.203768ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-199910 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdspecific-port2307594671/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T" /mount1: exit status 1 (307.664643ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1002 06:19:12.538766  379278 retry.go:31] will retry after 671.507195ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-199910 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-199910 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1912355386/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
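Note that the three stop attempts at the end find no parent process: the earlier mount --kill=true already tore down every mount helper for the profile, which is exactly what VerifyCleanup asserts. The cleanup call on its own (sketch, same flag as the log):

  # kill all lingering mount processes for the profile in one shot
  out/minikube-linux-amd64 mount -p functional-199910 --kill=true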

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 service list: (1.68908168s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-199910 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-199910 service list -o json: (1.682992724s)
functional_test.go:1504: Took "1.683078735s" to run "out/minikube-linux-amd64 -p functional-199910 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-199910
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-199910
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-199910
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (116.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m55.978178681s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (116.68s)

TestMultiControlPlane/serial/DeployApp (5.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 kubectl -- rollout status deployment/busybox: (3.174250246s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-rnfpd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-w8b9t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-rnfpd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-w8b9t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-rnfpd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-w8b9t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.06s)

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-rnfpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-rnfpd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-w8b9t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-w8b9t -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)
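The nslookup pipeline above relies on busybox's nslookup printing the resolved address on its fifth output line; awk 'NR==5' selects that line and cut -d' ' -f3 takes the address field, which the test then pings (192.168.49.1, the host gateway). Standalone, with a pod name taken from the log:

  HOST_IP=$(out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  out/minikube-linux-amd64 -p ha-092353 kubectl -- exec busybox-7b57f96db7-g22lz -- sh -c "ping -c 1 $HOST_IP"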

TestMultiControlPlane/serial/AddWorkerNode (24.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 node add --alsologtostderr -v 5: (23.852888275s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.71s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-092353 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp testdata/cp-test.txt ha-092353:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1347850495/001/cp-test_ha-092353.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353:/home/docker/cp-test.txt ha-092353-m02:/home/docker/cp-test_ha-092353_ha-092353-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test_ha-092353_ha-092353-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353:/home/docker/cp-test.txt ha-092353-m03:/home/docker/cp-test_ha-092353_ha-092353-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test_ha-092353_ha-092353-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353:/home/docker/cp-test.txt ha-092353-m04:/home/docker/cp-test_ha-092353_ha-092353-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test_ha-092353_ha-092353-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp testdata/cp-test.txt ha-092353-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1347850495/001/cp-test_ha-092353-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m02:/home/docker/cp-test.txt ha-092353:/home/docker/cp-test_ha-092353-m02_ha-092353.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test_ha-092353-m02_ha-092353.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m02:/home/docker/cp-test.txt ha-092353-m03:/home/docker/cp-test_ha-092353-m02_ha-092353-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test_ha-092353-m02_ha-092353-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m02:/home/docker/cp-test.txt ha-092353-m04:/home/docker/cp-test_ha-092353-m02_ha-092353-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test_ha-092353-m02_ha-092353-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp testdata/cp-test.txt ha-092353-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1347850495/001/cp-test_ha-092353-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m03:/home/docker/cp-test.txt ha-092353:/home/docker/cp-test_ha-092353-m03_ha-092353.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test_ha-092353-m03_ha-092353.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m03:/home/docker/cp-test.txt ha-092353-m02:/home/docker/cp-test_ha-092353-m03_ha-092353-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test_ha-092353-m03_ha-092353-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m03:/home/docker/cp-test.txt ha-092353-m04:/home/docker/cp-test_ha-092353-m03_ha-092353-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test_ha-092353-m03_ha-092353-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp testdata/cp-test.txt ha-092353-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1347850495/001/cp-test_ha-092353-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m04:/home/docker/cp-test.txt ha-092353:/home/docker/cp-test_ha-092353-m04_ha-092353.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353 "sudo cat /home/docker/cp-test_ha-092353-m04_ha-092353.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m04:/home/docker/cp-test.txt ha-092353-m02:/home/docker/cp-test_ha-092353-m04_ha-092353-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m02 "sudo cat /home/docker/cp-test_ha-092353-m04_ha-092353-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 cp ha-092353-m04:/home/docker/cp-test.txt ha-092353-m03:/home/docker/cp-test_ha-092353-m04_ha-092353-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 ssh -n ha-092353-m03 "sudo cat /home/docker/cp-test_ha-092353-m04_ha-092353-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.30s)

TestMultiControlPlane/serial/StopSecondaryNode (12.55s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 node stop m02 --alsologtostderr -v 5: (11.874149749s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5: exit status 7 (680.252063ms)

-- stdout --
	ha-092353
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-092353-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092353-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-092353-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1002 06:27:17.084522  456817 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:27:17.084815  456817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:27:17.084826  456817 out.go:374] Setting ErrFile to fd 2...
	I1002 06:27:17.084830  456817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:27:17.085072  456817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:27:17.085276  456817 out.go:368] Setting JSON to false
	I1002 06:27:17.085304  456817 mustload.go:65] Loading cluster: ha-092353
	I1002 06:27:17.085406  456817 notify.go:220] Checking for updates...
	I1002 06:27:17.085645  456817 config.go:182] Loaded profile config "ha-092353": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:27:17.085657  456817 status.go:174] checking status of ha-092353 ...
	I1002 06:27:17.086160  456817 cli_runner.go:164] Run: docker container inspect ha-092353 --format={{.State.Status}}
	I1002 06:27:17.106407  456817 status.go:371] ha-092353 host status = "Running" (err=<nil>)
	I1002 06:27:17.106426  456817 host.go:66] Checking if "ha-092353" exists ...
	I1002 06:27:17.106642  456817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-092353
	I1002 06:27:17.123951  456817 host.go:66] Checking if "ha-092353" exists ...
	I1002 06:27:17.124168  456817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:27:17.124216  456817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-092353
	I1002 06:27:17.146292  456817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/ha-092353/id_rsa Username:docker}
	I1002 06:27:17.243773  456817 ssh_runner.go:195] Run: systemctl --version
	I1002 06:27:17.249744  456817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:27:17.261618  456817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:27:17.313668  456817 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-02 06:27:17.303720516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:27:17.314488  456817 kubeconfig.go:125] found "ha-092353" server: "https://192.168.49.254:8443"
	I1002 06:27:17.314527  456817 api_server.go:166] Checking apiserver status ...
	I1002 06:27:17.314583  456817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:27:17.327014  456817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	W1002 06:27:17.335903  456817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:27:17.335962  456817 ssh_runner.go:195] Run: ls
	I1002 06:27:17.339474  456817 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 06:27:17.345251  456817 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 06:27:17.345271  456817 status.go:463] ha-092353 apiserver status = Running (err=<nil>)
	I1002 06:27:17.345280  456817 status.go:176] ha-092353 status: &{Name:ha-092353 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:27:17.345300  456817 status.go:174] checking status of ha-092353-m02 ...
	I1002 06:27:17.345523  456817 cli_runner.go:164] Run: docker container inspect ha-092353-m02 --format={{.State.Status}}
	I1002 06:27:17.362889  456817 status.go:371] ha-092353-m02 host status = "Stopped" (err=<nil>)
	I1002 06:27:17.362908  456817 status.go:384] host is not running, skipping remaining checks
	I1002 06:27:17.362926  456817 status.go:176] ha-092353-m02 status: &{Name:ha-092353-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:27:17.362950  456817 status.go:174] checking status of ha-092353-m03 ...
	I1002 06:27:17.363165  456817 cli_runner.go:164] Run: docker container inspect ha-092353-m03 --format={{.State.Status}}
	I1002 06:27:17.380506  456817 status.go:371] ha-092353-m03 host status = "Running" (err=<nil>)
	I1002 06:27:17.380525  456817 host.go:66] Checking if "ha-092353-m03" exists ...
	I1002 06:27:17.380767  456817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-092353-m03
	I1002 06:27:17.397733  456817 host.go:66] Checking if "ha-092353-m03" exists ...
	I1002 06:27:17.398089  456817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:27:17.398136  456817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-092353-m03
	I1002 06:27:17.415390  456817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/ha-092353-m03/id_rsa Username:docker}
	I1002 06:27:17.512768  456817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:27:17.525796  456817 kubeconfig.go:125] found "ha-092353" server: "https://192.168.49.254:8443"
	I1002 06:27:17.525821  456817 api_server.go:166] Checking apiserver status ...
	I1002 06:27:17.525848  456817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:27:17.536822  456817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1315/cgroup
	W1002 06:27:17.544631  456817 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1315/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:27:17.544678  456817 ssh_runner.go:195] Run: ls
	I1002 06:27:17.548084  456817 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 06:27:17.551994  456817 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 06:27:17.552014  456817 status.go:463] ha-092353-m03 apiserver status = Running (err=<nil>)
	I1002 06:27:17.552022  456817 status.go:176] ha-092353-m03 status: &{Name:ha-092353-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:27:17.552047  456817 status.go:174] checking status of ha-092353-m04 ...
	I1002 06:27:17.552268  456817 cli_runner.go:164] Run: docker container inspect ha-092353-m04 --format={{.State.Status}}
	I1002 06:27:17.569444  456817 status.go:371] ha-092353-m04 host status = "Running" (err=<nil>)
	I1002 06:27:17.569467  456817 host.go:66] Checking if "ha-092353-m04" exists ...
	I1002 06:27:17.569663  456817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-092353-m04
	I1002 06:27:17.587712  456817 host.go:66] Checking if "ha-092353-m04" exists ...
	I1002 06:27:17.588017  456817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:27:17.588061  456817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-092353-m04
	I1002 06:27:17.606108  456817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/ha-092353-m04/id_rsa Username:docker}
	I1002 06:27:17.704548  456817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:27:17.716587  456817 status.go:176] ha-092353-m04 status: &{Name:ha-092353-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.55s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.04s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 node start m02 --alsologtostderr -v 5: (8.103592159s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (93.16s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 stop --alsologtostderr -v 5: (36.847281554s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 start --wait true --alsologtostderr -v 5
E1002 06:28:12.516857  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.044012  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.050429  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.062643  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.084184  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.125960  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.208061  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.369469  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:32.691482  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:33.333263  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:34.615045  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:37.176493  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:42.298774  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:28:52.540940  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 start --wait true --alsologtostderr -v 5: (56.214799047s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (93.16s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 node delete m03 --alsologtostderr -v 5: (8.264601306s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.06s)
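
The go-template passed to kubectl above is how the suite asserts post-delete health: it walks every node's conditions and prints the Ready status, one per line. A minimal standalone version of the same check (nothing in it is specific to this run):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

With m03 deleted, the expectation is a True line for each remaining node and no False entries.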

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (35.71s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 stop --alsologtostderr -v 5
E1002 06:29:13.023102  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:29:35.580049  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 stop --alsologtostderr -v 5: (35.599753262s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5: exit status 7 (105.9765ms)

-- stdout --
	ha-092353
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092353-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-092353-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:29:46.867090  473206 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:29:46.867347  473206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:29:46.867355  473206 out.go:374] Setting ErrFile to fd 2...
	I1002 06:29:46.867358  473206 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:29:46.867570  473206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:29:46.867730  473206 out.go:368] Setting JSON to false
	I1002 06:29:46.867766  473206 mustload.go:65] Loading cluster: ha-092353
	I1002 06:29:46.867842  473206 notify.go:220] Checking for updates...
	I1002 06:29:46.868140  473206 config.go:182] Loaded profile config "ha-092353": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:29:46.868155  473206 status.go:174] checking status of ha-092353 ...
	I1002 06:29:46.868594  473206 cli_runner.go:164] Run: docker container inspect ha-092353 --format={{.State.Status}}
	I1002 06:29:46.888708  473206 status.go:371] ha-092353 host status = "Stopped" (err=<nil>)
	I1002 06:29:46.888766  473206 status.go:384] host is not running, skipping remaining checks
	I1002 06:29:46.888780  473206 status.go:176] ha-092353 status: &{Name:ha-092353 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:29:46.888818  473206 status.go:174] checking status of ha-092353-m02 ...
	I1002 06:29:46.889227  473206 cli_runner.go:164] Run: docker container inspect ha-092353-m02 --format={{.State.Status}}
	I1002 06:29:46.908231  473206 status.go:371] ha-092353-m02 host status = "Stopped" (err=<nil>)
	I1002 06:29:46.908251  473206 status.go:384] host is not running, skipping remaining checks
	I1002 06:29:46.908257  473206 status.go:176] ha-092353-m02 status: &{Name:ha-092353-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:29:46.908276  473206 status.go:174] checking status of ha-092353-m04 ...
	I1002 06:29:46.908521  473206 cli_runner.go:164] Run: docker container inspect ha-092353-m04 --format={{.State.Status}}
	I1002 06:29:46.925478  473206 status.go:371] ha-092353-m04 host status = "Stopped" (err=<nil>)
	I1002 06:29:46.925541  473206 status.go:384] host is not running, skipping remaining checks
	I1002 06:29:46.925551  473206 status.go:176] ha-092353-m04 status: &{Name:ha-092353-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.71s)
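
The non-zero exit above is by design: minikube status encodes VM, cluster and Kubernetes state on separate bits of the exit code (1 = minikube not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so the exit status 7 seen here (1+2+4) is the fully-stopped case. A sketch of keying off that in a script, assuming the same bitmask semantics:

  out/minikube-linux-amd64 -p ha-092353 status
  code=$?
  # 7 == 1|2|4: host, cluster and apiserver all report stopped
  [ "$code" -eq 7 ] && echo "profile fully stopped"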

TestMultiControlPlane/serial/RestartCluster (55.9s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1002 06:29:53.984474  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (55.129443308s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.90s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (39.89s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 node add --control-plane --alsologtostderr -v 5
E1002 06:31:15.905860  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-092353 node add --control-plane --alsologtostderr -v 5: (39.010503447s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-092353 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (37.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-112640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-112640 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (37.794527652s)
--- PASS: TestJSONOutput/start/Command (37.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-112640 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-112640 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-112640 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-112640 --output=json --user=testUser: (5.704249548s)
--- PASS: TestJSONOutput/stop/Command (5.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-896302 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-896302 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.864129ms)

-- stdout --
	{"specversion":"1.0","id":"2fda000f-843b-4b8a-b31d-789e7fb7d5e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-896302] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"da4fc17e-5d2d-41ca-8f2f-297c033b72b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"6771944d-4a82-4118-80dc-23174d04a813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1e3fa98-48bc-4dfa-a565-e9680fb3281d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig"}}
	{"specversion":"1.0","id":"9806ee60-871d-44d1-a0d8-c28475592771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube"}}
	{"specversion":"1.0","id":"f33a0b54-9781-4c83-94e9-0fd034be9923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e3f6c45-c495-419c-a784-9623afbb9775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f055e54-0438-49b5-81f9-c3010f34dd52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-896302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-896302
--- PASS: TestErrorJSONOutput (0.20s)
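
Every line minikube prints under --output=json is a CloudEvents-style object: the event kind sits in .type (io.k8s.sigs.minikube.step, .info, .error, and so on) and the payload under .data. That makes failures such as the DRV_UNSUPPORTED_OS event above easy to pick out mechanically; a sketch using jq, with a made-up profile name:

  out/minikube-linux-amd64 start -p demo --driver=fail --output=json \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'

On a run like the one above this prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64.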

TestKicCustomNetwork/create_custom_network (33.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-161916 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-161916 --network=: (31.047284268s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-161916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-161916
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-161916: (2.051325188s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.12s)

TestKicCustomNetwork/use_default_bridge_network (23.45s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-661520 --network=bridge
E1002 06:33:12.517775  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-661520 --network=bridge: (21.533276182s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-661520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-661520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-661520: (1.891744796s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.45s)

TestKicExistingNetwork (23.42s)

=== RUN   TestKicExistingNetwork
I1002 06:33:18.138998  379278 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 06:33:18.154538  379278 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 06:33:18.154621  379278 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 06:33:18.154642  379278 cli_runner.go:164] Run: docker network inspect existing-network
W1002 06:33:18.169640  379278 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 06:33:18.169666  379278 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 06:33:18.169678  379278 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 06:33:18.169782  379278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 06:33:18.185641  379278 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d5058a9c06ff IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:f5:54:71:5c:8b} reservation:<nil>}
I1002 06:33:18.186060  379278 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013640}
I1002 06:33:18.186099  379278 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 06:33:18.186158  379278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 06:33:18.239285  379278 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-308694 --network=existing-network
E1002 06:33:32.051096  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-308694 --network=existing-network: (21.407358081s)
helpers_test.go:175: Cleaning up "existing-network-308694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-308694
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-308694: (1.879382544s)
I1002 06:33:41.542364  379278 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.42s)
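
The interesting part of this test is the setup visible in the log: the harness first creates a network named existing-network itself (picking 192.168.58.0/24 after skipping the in-use 192.168.49.0/24), then checks that minikube adopts it via --network= instead of provisioning a new one. A trimmed reproduction of that flow, with the create flags taken from the log and a hypothetical profile name:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true \
    --label=name.minikube.sigs.k8s.io=existing-network existing-network
  out/minikube-linux-amd64 start -p my-cluster --network=existing-network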

TestKicCustomSubnet (23.14s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-749344 --subnet=192.168.60.0/24
E1002 06:33:59.754062  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-749344 --subnet=192.168.60.0/24: (21.057693503s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-749344 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-749344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-749344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-749344: (2.063394597s)
--- PASS: TestKicCustomSubnet (23.14s)

TestKicStaticIP (26.45s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-545845 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-545845 --static-ip=192.168.200.200: (24.280603393s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-545845 ip
helpers_test.go:175: Cleaning up "static-ip-545845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-545845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-545845: (2.033893891s)
--- PASS: TestKicStaticIP (26.45s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (46.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-599394 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-599394 --driver=docker  --container-runtime=containerd: (20.502867952s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-610147 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-610147 --driver=docker  --container-runtime=containerd: (19.896700499s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-599394
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-610147
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-610147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-610147
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-610147: (2.237584393s)
helpers_test.go:175: Cleaning up "first-599394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-599394
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-599394: (2.294833495s)
--- PASS: TestMinikubeProfile (46.09s)

TestMountStart/serial/StartWithMountFirst (5.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-974078 --memory=3072 --mount-string /tmp/TestMountStartserial1350646/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-974078 --memory=3072 --mount-string /tmp/TestMountStartserial1350646/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.919091162s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.92s)
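
The flag soup in the start command decomposes cleanly: --mount-string takes host:guest paths, --mount-port and --mount-msize tune the 9p transport (server port and message size), --mount-uid/--mount-gid set ownership inside the guest, and --no-kubernetes keeps the run to just the machine plus mount. A minimal sketch mirroring those flags, with a hypothetical profile name and host path:

  out/minikube-linux-amd64 start -p mount-demo --memory=3072 --no-kubernetes \
    --mount-string /tmp/data:/minikube-host --mount-port 46464 \
    --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host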

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-974078 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986720 --memory=3072 --mount-string /tmp/TestMountStartserial1350646/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986720 --memory=3072 --mount-string /tmp/TestMountStartserial1350646/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.655091063s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.66s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-974078 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-974078 --alsologtostderr -v=5: (1.634214361s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-986720
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-986720: (1.185137371s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986720
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986720: (6.053039624s)
--- PASS: TestMountStart/serial/RestartStopped (7.05s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986720 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (66.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296660 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296660 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.632311654s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.11s)

TestMultiNode/serial/DeployApp2Nodes (4.27s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-296660 -- rollout status deployment/busybox: (2.913017187s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-4d7hv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-5ddx4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-4d7hv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-5ddx4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-4d7hv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-5ddx4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.27s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-4d7hv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-4d7hv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-5ddx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-296660 -- exec busybox-7b57f96db7-5ddx4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
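
Each pod here resolves host.minikube.internal, the name minikube injects so workloads can reach the host machine, and then pings the address it gets back. The awk/cut pipeline exists because busybox's nslookup puts the answer on line 5, field 3 of its output. The same probe can be run by hand (pod name copied from this run):

  kubectl exec busybox-7b57f96db7-4d7hv -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"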

TestMultiNode/serial/AddNode (23.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-296660 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-296660 -v=5 --alsologtostderr: (22.996242779s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.62s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-296660 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp testdata/cp-test.txt multinode-296660:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile191385672/001/cp-test_multinode-296660.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660:/home/docker/cp-test.txt multinode-296660-m02:/home/docker/cp-test_multinode-296660_multinode-296660-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test_multinode-296660_multinode-296660-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660:/home/docker/cp-test.txt multinode-296660-m03:/home/docker/cp-test_multinode-296660_multinode-296660-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test_multinode-296660_multinode-296660-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp testdata/cp-test.txt multinode-296660-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile191385672/001/cp-test_multinode-296660-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m02:/home/docker/cp-test.txt multinode-296660:/home/docker/cp-test_multinode-296660-m02_multinode-296660.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test_multinode-296660-m02_multinode-296660.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m02:/home/docker/cp-test.txt multinode-296660-m03:/home/docker/cp-test_multinode-296660-m02_multinode-296660-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test_multinode-296660-m02_multinode-296660-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp testdata/cp-test.txt multinode-296660-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile191385672/001/cp-test_multinode-296660-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m03:/home/docker/cp-test.txt multinode-296660:/home/docker/cp-test_multinode-296660-m03_multinode-296660.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660 "sudo cat /home/docker/cp-test_multinode-296660-m03_multinode-296660.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 cp multinode-296660-m03:/home/docker/cp-test.txt multinode-296660-m02:/home/docker/cp-test_multinode-296660-m03_multinode-296660-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test_multinode-296660-m03_multinode-296660-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.27s)
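
The copy matrix above exercises minikube cp's addressing, where either side of the transfer may be a [node:]path within the profile, and verifies each hop with ssh plus cat (-n selects the node). One round trip, using names from this run:

  out/minikube-linux-amd64 -p multinode-296660 cp testdata/cp-test.txt multinode-296660-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-296660 ssh -n multinode-296660-m02 "sudo cat /home/docker/cp-test.txt"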

TestMultiNode/serial/StopNode (2.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-296660 node stop m03: (1.228986503s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296660 status: exit status 7 (477.342969ms)

-- stdout --
	multinode-296660
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-296660-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-296660-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr: exit status 7 (479.021854ms)

-- stdout --
	multinode-296660
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-296660-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-296660-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:37:27.941896  535498 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:37:27.942154  535498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.942164  535498 out.go:374] Setting ErrFile to fd 2...
	I1002 06:37:27.942168  535498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:37:27.942395  535498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:37:27.942620  535498 out.go:368] Setting JSON to false
	I1002 06:37:27.942653  535498 mustload.go:65] Loading cluster: multinode-296660
	I1002 06:37:27.942690  535498 notify.go:220] Checking for updates...
	I1002 06:37:27.943048  535498 config.go:182] Loaded profile config "multinode-296660": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:27.943063  535498 status.go:174] checking status of multinode-296660 ...
	I1002 06:37:27.943550  535498 cli_runner.go:164] Run: docker container inspect multinode-296660 --format={{.State.Status}}
	I1002 06:37:27.964070  535498 status.go:371] multinode-296660 host status = "Running" (err=<nil>)
	I1002 06:37:27.964120  535498 host.go:66] Checking if "multinode-296660" exists ...
	I1002 06:37:27.964405  535498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-296660
	I1002 06:37:27.981465  535498 host.go:66] Checking if "multinode-296660" exists ...
	I1002 06:37:27.981691  535498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:27.981727  535498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-296660
	I1002 06:37:27.998569  535498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33284 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/multinode-296660/id_rsa Username:docker}
	I1002 06:37:28.097906  535498 ssh_runner.go:195] Run: systemctl --version
	I1002 06:37:28.104052  535498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:28.115571  535498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:37:28.166645  535498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-02 06:37:28.157034377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:37:28.167286  535498 kubeconfig.go:125] found "multinode-296660" server: "https://192.168.67.2:8443"
	I1002 06:37:28.167320  535498 api_server.go:166] Checking apiserver status ...
	I1002 06:37:28.167354  535498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:28.178904  535498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1305/cgroup
	W1002 06:37:28.186667  535498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1305/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 06:37:28.186725  535498 ssh_runner.go:195] Run: ls
	I1002 06:37:28.190147  535498 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 06:37:28.195078  535498 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 06:37:28.195103  535498 status.go:463] multinode-296660 apiserver status = Running (err=<nil>)
	I1002 06:37:28.195115  535498 status.go:176] multinode-296660 status: &{Name:multinode-296660 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:37:28.195142  535498 status.go:174] checking status of multinode-296660-m02 ...
	I1002 06:37:28.195446  535498 cli_runner.go:164] Run: docker container inspect multinode-296660-m02 --format={{.State.Status}}
	I1002 06:37:28.211989  535498 status.go:371] multinode-296660-m02 host status = "Running" (err=<nil>)
	I1002 06:37:28.212007  535498 host.go:66] Checking if "multinode-296660-m02" exists ...
	I1002 06:37:28.212255  535498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-296660-m02
	I1002 06:37:28.228296  535498 host.go:66] Checking if "multinode-296660-m02" exists ...
	I1002 06:37:28.228521  535498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:37:28.228553  535498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-296660-m02
	I1002 06:37:28.245352  535498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33289 SSHKeyPath:/home/jenkins/minikube-integration/21643-375701/.minikube/machines/multinode-296660-m02/id_rsa Username:docker}
	I1002 06:37:28.342632  535498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:28.354456  535498 status.go:176] multinode-296660-m02 status: &{Name:multinode-296660-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:37:28.354490  535498 status.go:174] checking status of multinode-296660-m03 ...
	I1002 06:37:28.354823  535498 cli_runner.go:164] Run: docker container inspect multinode-296660-m03 --format={{.State.Status}}
	I1002 06:37:28.372291  535498 status.go:371] multinode-296660-m03 host status = "Stopped" (err=<nil>)
	I1002 06:37:28.372311  535498 status.go:384] host is not running, skipping remaining checks
	I1002 06:37:28.372317  535498 status.go:176] multinode-296660-m03 status: &{Name:multinode-296660-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
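Note: as the StopNode run above shows, `minikube status` exits with status 7 while still printing per-node output when any host is stopped (the suite itself labels exit 7 "may be ok" elsewhere in this report). Below is a minimal Go sketch of tolerating that convention when driving minikube from a harness; the profile name is copied from this log, and treating exit 7 as "degraded but reported" is an assumption based only on the behavior visible in this run, not a documented contract.

// Sketch: run `minikube status` and accept exit status 7 as
// "status printed, but at least one node is stopped".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-296660", "status")
	out, err := cmd.Output() // stdout is returned even on a non-zero exit
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Print(string(out)) // everything running
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Print(string(out)) // one or more hosts stopped; status still valid
	default:
		panic(err) // any other exit code is a real failure
	}
}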

TestMultiNode/serial/StartAfterStop (6.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-296660 node start m03 -v=5 --alsologtostderr: (6.041800157s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.71s)

TestMultiNode/serial/RestartKeepsNodes (69.72s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296660
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-296660
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-296660: (24.750936761s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296660 --wait=true -v=5 --alsologtostderr
E1002 06:38:12.516840  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:38:32.044193  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296660 --wait=true -v=5 --alsologtostderr: (44.867513786s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296660
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.72s)

TestMultiNode/serial/DeleteNode (5.05s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-296660 node delete m03: (4.48057999s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.05s)
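The readiness check above pipes `kubectl get nodes` through a go-template that walks each node's status.conditions and prints the status of the Ready condition. For reference, here is a self-contained sketch evaluating the same template with Go's text/template over sample node JSON; the node list below is a hypothetical stand-in, not output captured from this run.

// Sketch: the readiness go-template from the test above, evaluated locally.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Template copied verbatim from the kubectl invocation in the log.
const nodeTemplate = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Hypothetical `kubectl get nodes -o json`-shaped input.
const sampleNodes = `{
  "items": [
    {"status": {"conditions": [
      {"type": "MemoryPressure", "status": "False"},
      {"type": "Ready", "status": "True"}
    ]}}
  ]
}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sampleNodes), &nodes); err != nil {
		panic(err)
	}
	tmpl := template.Must(template.New("ready").Parse(nodeTemplate))
	// Prints " True" for each node whose Ready condition is True.
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}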

TestMultiNode/serial/StopMultiNode (23.79s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-296660 stop: (23.627539897s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296660 status: exit status 7 (82.16934ms)

-- stdout --
	multinode-296660
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-296660-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr: exit status 7 (83.317098ms)

-- stdout --
	multinode-296660
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-296660-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 06:39:13.614650  545270 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:39:13.614935  545270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:39:13.614943  545270 out.go:374] Setting ErrFile to fd 2...
	I1002 06:39:13.614947  545270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:39:13.615109  545270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:39:13.615280  545270 out.go:368] Setting JSON to false
	I1002 06:39:13.615307  545270 mustload.go:65] Loading cluster: multinode-296660
	I1002 06:39:13.615355  545270 notify.go:220] Checking for updates...
	I1002 06:39:13.615632  545270 config.go:182] Loaded profile config "multinode-296660": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:39:13.615646  545270 status.go:174] checking status of multinode-296660 ...
	I1002 06:39:13.616042  545270 cli_runner.go:164] Run: docker container inspect multinode-296660 --format={{.State.Status}}
	I1002 06:39:13.632937  545270 status.go:371] multinode-296660 host status = "Stopped" (err=<nil>)
	I1002 06:39:13.632961  545270 status.go:384] host is not running, skipping remaining checks
	I1002 06:39:13.632969  545270 status.go:176] multinode-296660 status: &{Name:multinode-296660 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 06:39:13.633006  545270 status.go:174] checking status of multinode-296660-m02 ...
	I1002 06:39:13.633233  545270 cli_runner.go:164] Run: docker container inspect multinode-296660-m02 --format={{.State.Status}}
	I1002 06:39:13.649853  545270 status.go:371] multinode-296660-m02 host status = "Stopped" (err=<nil>)
	I1002 06:39:13.649869  545270 status.go:384] host is not running, skipping remaining checks
	I1002 06:39:13.649876  545270 status.go:176] multinode-296660-m02 status: &{Name:multinode-296660-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

TestMultiNode/serial/RestartMultiNode (50.03s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296660 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296660 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.448800781s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-296660 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.03s)

TestMultiNode/serial/ValidateNameConflict (21.53s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-296660
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296660-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-296660-m02 --driver=docker  --container-runtime=containerd: exit status 14 (60.843143ms)

-- stdout --
	* [multinode-296660-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-296660-m02' is duplicated with machine name 'multinode-296660-m02' in profile 'multinode-296660'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-296660-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-296660-m03 --driver=docker  --container-runtime=containerd: (19.254954105s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-296660
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-296660: exit status 80 (273.789595ms)

-- stdout --
	* Adding node m03 to cluster multinode-296660 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-296660-m03 already exists in multinode-296660-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-296660-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-296660-m03: (1.89020242s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.53s)

TestPreload (111.54s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-452083 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-452083 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (44.53535221s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-452083 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-452083 image pull gcr.io/k8s-minikube/busybox: (2.195657984s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-452083
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-452083: (6.582256449s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-452083 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-452083 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (55.652123638s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-452083 image list
helpers_test.go:175: Cleaning up "test-preload-452083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-452083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-452083: (2.355387369s)
--- PASS: TestPreload (111.54s)

TestScheduledStopUnix (95.53s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-601382 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-601382 --memory=3072 --driver=docker  --container-runtime=containerd: (19.744477144s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601382 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-601382 -n scheduled-stop-601382
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601382 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 06:42:40.905728  379278 retry.go:31] will retry after 68.163µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.906905  379278 retry.go:31] will retry after 176.51µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.908030  379278 retry.go:31] will retry after 240.76µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.909175  379278 retry.go:31] will retry after 478.001µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.910305  379278 retry.go:31] will retry after 647.123µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.911453  379278 retry.go:31] will retry after 808.573µs: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.912610  379278 retry.go:31] will retry after 1.016352ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.913767  379278 retry.go:31] will retry after 1.601599ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.915981  379278 retry.go:31] will retry after 3.300372ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.920183  379278 retry.go:31] will retry after 2.928842ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.923380  379278 retry.go:31] will retry after 3.712704ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.927579  379278 retry.go:31] will retry after 12.362296ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.940815  379278 retry.go:31] will retry after 12.626978ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.954036  379278 retry.go:31] will retry after 28.163821ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
I1002 06:42:40.983267  379278 retry.go:31] will retry after 24.251363ms: open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/scheduled-stop-601382/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601382 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601382 -n scheduled-stop-601382
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601382
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-601382 --schedule 15s
E1002 06:43:12.517461  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 06:43:32.052402  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-601382
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-601382: exit status 7 (66.583439ms)

-- stdout --
	scheduled-stop-601382
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601382 -n scheduled-stop-601382
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-601382 -n scheduled-stop-601382: exit status 7 (63.991682ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-601382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-601382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-601382: (4.448982013s)
--- PASS: TestScheduledStopUnix (95.53s)
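The retry.go lines earlier in this test show the polling pattern the harness uses while waiting for the scheduled-stop pid file: probe the path, log "will retry after <delay>", sleep, and grow the delay. minikube's actual retry helper isn't reproduced in this report, so the following is only a minimal sketch of that visible pattern; the function name, starting delay, growth factor, and jitter are all assumptions.

// Sketch: poll for a file with growing, jittered delays, mirroring the
// "will retry after ..." lines in the log above. Not minikube's retry.go.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func waitForFile(path string, attempts int) error {
	delay := 50 * time.Microsecond // assumed starting delay
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil // pid file appeared
		} else {
			// Mirror the log line: report the upcoming delay, then sleep.
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			// Roughly double with a little jitter, as the logged intervals suggest.
			delay = delay*2 + time.Duration(rand.Int63n(int64(delay)))
		}
	}
	return fmt.Errorf("%s never appeared after %d attempts", path, attempts)
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop-example/pid", 15) // hypothetical path
}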

TestInsufficientStorage (11.95s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-912595 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-912595 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.580621382s)

-- stdout --
	{"specversion":"1.0","id":"574c8660-a6da-430f-9aa2-950c454c55a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-912595] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74aa19ca-b213-4904-89dc-f79ae0809d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"6c31c95a-67c2-46bf-b31e-d556c3f3ad1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bb7e75c-46a8-4103-bc7b-86093c9f9844","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig"}}
	{"specversion":"1.0","id":"e25f347d-b20c-4b0c-8689-70b9c47becf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube"}}
	{"specversion":"1.0","id":"82103750-9511-4be9-8c7f-4a4da6778994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"19c702e5-47a2-4776-bc1c-243a3a4b7785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5bb467e0-55e3-460f-8f64-21acea9b6acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0d60cfa5-fe38-4b0a-a497-5e2aebbe05bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"409d05c3-c79d-460d-a21e-0b0a3e926a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"21864d33-92bb-4341-9099-17ebb3bce68e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a5faf0ed-402d-4069-bdb0-2506f47798b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-912595\" primary control-plane node in \"insufficient-storage-912595\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"07354b07-306b-47b4-b13c-9a66d1c50e63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"04ee454a-23a8-4f12-8e12-f6e57380a544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4696a134-2154-4d4c-8b9a-f6209c3bc332","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-912595 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-912595 --output=json --layout=cluster: exit status 7 (266.236049ms)

-- stdout --
	{"Name":"insufficient-storage-912595","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-912595","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1002 06:44:06.127484  567089 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-912595" does not appear in /home/jenkins/minikube-integration/21643-375701/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-912595 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-912595 --output=json --layout=cluster: exit status 7 (271.061977ms)

-- stdout --
	{"Name":"insufficient-storage-912595","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-912595","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1002 06:44:06.399437  567195 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-912595" does not appear in /home/jenkins/minikube-integration/21643-375701/kubeconfig
	E1002 06:44:06.409410  567195 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/insufficient-storage-912595/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-912595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-912595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-912595: (1.835610175s)
--- PASS: TestInsufficientStorage (11.95s)
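With --output=json, each progress line in the stdout block above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). A short sketch of decoding those lines in Go follows; only the fields visible in this report are modeled, and the sample line is copied verbatim from the log.

// Sketch: decode one CloudEvents-style line from `minikube start --output=json`.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Copied from the stdout block above.
	line := `{"specversion":"1.0","id":"74aa19ca-b213-4904-89dc-f79ae0809d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Filter on the event type to separate info lines from errors; the log
	// shows io.k8s.sigs.minikube.error events carrying exitcode/advice fields.
	fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
}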

TestRunningBinaryUpgrade (45.36s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3732878688 start -p running-upgrade-879201 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3732878688 start -p running-upgrade-879201 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (19.894464821s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-879201 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-879201 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.510385069s)
helpers_test.go:175: Cleaning up "running-upgrade-879201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-879201
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-879201: (2.14338944s)
--- PASS: TestRunningBinaryUpgrade (45.36s)

TestKubernetesUpgrade (314.79s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.758881007s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-125086
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-125086: (3.353649256s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-125086 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-125086 status --format={{.Host}}: exit status 7 (74.966208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m33.388272017s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-125086 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (78.532681ms)

-- stdout --
	* [kubernetes-upgrade-125086] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-125086
	    minikube start -p kubernetes-upgrade-125086 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1250862 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-125086 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125086 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.782200134s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-125086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-125086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-125086: (2.284514469s)
--- PASS: TestKubernetesUpgrade (314.79s)

TestMissingContainerUpgrade (116.3s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1705420937 start -p missing-upgrade-884574 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1705420937 start -p missing-upgrade-884574 --memory=3072 --driver=docker  --container-runtime=containerd: (50.554562696s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-884574
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-884574
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-884574 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-884574 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.089964136s)
helpers_test.go:175: Cleaning up "missing-upgrade-884574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-884574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-884574: (4.152752966s)
--- PASS: TestMissingContainerUpgrade (116.30s)

TestStoppedBinaryUpgrade/Setup (2.63s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (66.345215ms)

-- stdout --
	* [NoKubernetes-584508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (35.36s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-584508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-584508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.054089154s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-584508 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.36s)

TestStoppedBinaryUpgrade/Upgrade (96.9s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1420312204 start -p stopped-upgrade-589097 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1420312204 start -p stopped-upgrade-589097 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (51.24383641s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1420312204 -p stopped-upgrade-589097 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1420312204 -p stopped-upgrade-589097 stop: (4.221057789s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-589097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-589097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.436484348s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.90s)

TestNoKubernetes/serial/StartWithStopK8s (26.34s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1002 06:44:55.116174  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.131113316s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-584508 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-584508 status -o json: exit status 2 (287.829123ms)

-- stdout --
	{"Name":"NoKubernetes-584508","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-584508
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-584508: (1.924163795s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.34s)
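`minikube status -o json` returns the single-profile document shown in the stdout block above even though the command itself exits non-zero (status 2 here) because the kubelet and apiserver are stopped, so callers should inspect the JSON rather than the exit code alone. A decoding sketch follows; the struct mirrors exactly the keys in the logged line, and the input is copied from it.

// Sketch: parse the `minikube status -o json` document from the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-584508","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}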

TestNoKubernetes/serial/Start (4.73s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-584508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.732860252s)
--- PASS: TestNoKubernetes/serial/Start (4.73s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-584508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-584508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (261.251135ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (30.14s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (16.335635553s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.807943495s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.14s)

TestNoKubernetes/serial/Stop (1.21s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-584508
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-584508: (1.214875837s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.89s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-584508 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-584508 --driver=docker  --container-runtime=containerd: (6.889433706s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.89s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-589097
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-589097: (1.031269946s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

TestPause/serial/Start (40.75s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-827783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-827783 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (40.750452736s)
--- PASS: TestPause/serial/Start (40.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-584508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-584508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.833407ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/SecondStartNoReconfiguration (6.16s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-827783 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-827783 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.146089959s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.16s)

TestNetworkPlugins/group/false (3.57s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-271519 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-271519 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (178.336686ms)

-- stdout --
	* [false-271519] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1002 06:46:35.439637  607774 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:46:35.440105  607774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:46:35.440118  607774 out.go:374] Setting ErrFile to fd 2...
	I1002 06:46:35.440124  607774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:46:35.442455  607774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-375701/.minikube/bin
	I1002 06:46:35.443108  607774 out.go:368] Setting JSON to false
	I1002 06:46:35.444291  607774 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8938,"bootTime":1759378657,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 06:46:35.444386  607774 start.go:140] virtualization: kvm guest
	I1002 06:46:35.446044  607774 out.go:179] * [false-271519] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 06:46:35.447289  607774 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:46:35.447354  607774 notify.go:220] Checking for updates...
	I1002 06:46:35.449393  607774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:46:35.450590  607774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-375701/kubeconfig
	I1002 06:46:35.451726  607774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-375701/.minikube
	I1002 06:46:35.453835  607774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 06:46:35.455019  607774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:46:35.456639  607774 config.go:182] Loaded profile config "kubernetes-upgrade-125086": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:46:35.456832  607774 config.go:182] Loaded profile config "pause-827783": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:46:35.456970  607774 config.go:182] Loaded profile config "running-upgrade-879201": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1002 06:46:35.457073  607774 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:46:35.487010  607774 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I1002 06:46:35.487152  607774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:46:35.553249  607774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-02 06:46:35.540523027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 06:46:35.553409  607774 docker.go:318] overlay module found
	I1002 06:46:35.555000  607774 out.go:179] * Using the docker driver based on user configuration
	I1002 06:46:35.556161  607774 start.go:304] selected driver: docker
	I1002 06:46:35.556180  607774 start.go:924] validating driver "docker" against <nil>
	I1002 06:46:35.556194  607774 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:46:35.557990  607774 out.go:203] 
	W1002 06:46:35.559013  607774 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1002 06:46:35.560033  607774 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-271519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-271519

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-271519

>>> host: /etc/nsswitch.conf:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/hosts:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/resolv.conf:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-271519

>>> host: crictl pods:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: crictl containers:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> k8s: describe netcat deployment:
error: context "false-271519" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-271519" does not exist

>>> k8s: netcat logs:
error: context "false-271519" does not exist

>>> k8s: describe coredns deployment:
error: context "false-271519" does not exist

>>> k8s: describe coredns pods:
error: context "false-271519" does not exist

>>> k8s: coredns logs:
error: context "false-271519" does not exist

>>> k8s: describe api server pod(s):
error: context "false-271519" does not exist

>>> k8s: api server logs:
error: context "false-271519" does not exist

>>> host: /etc/cni:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: ip a s:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: ip r s:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: iptables-save:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: iptables table nat:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> k8s: describe kube-proxy daemon set:
error: context "false-271519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-271519" does not exist

>>> k8s: kube-proxy logs:
error: context "false-271519" does not exist

>>> host: kubelet daemon status:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: kubelet daemon config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> k8s: kubelet logs:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:45:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-125086
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-827783
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-879201
contexts:
- context:
    cluster: kubernetes-upgrade-125086
    user: kubernetes-upgrade-125086
  name: kubernetes-upgrade-125086
- context:
    cluster: pause-827783
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-827783
  name: pause-827783
- context:
    cluster: running-upgrade-879201
    user: running-upgrade-879201
  name: running-upgrade-879201
current-context: pause-827783
kind: Config
users:
- name: kubernetes-upgrade-125086
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.key
- name: pause-827783
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.key
- name: running-upgrade-879201
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-271519

>>> host: docker daemon status:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: docker daemon config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/docker/daemon.json:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: docker system info:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: cri-docker daemon status:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: cri-docker daemon config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: cri-dockerd version:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: containerd daemon status:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: containerd daemon config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/containerd/config.toml:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: containerd config dump:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: crio daemon status:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: crio daemon config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: /etc/crio:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

>>> host: crio config:
* Profile "false-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-271519"

----------------------- debugLogs end: false-271519 [took: 3.18925076s] --------------------------------
helpers_test.go:175: Cleaning up "false-271519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-271519
--- PASS: TestNetworkPlugins/group/false (3.57s)
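
Note: this test passes by expecting a rejection. Exit status 14 is minikube's usage-error code (the MK_USAGE reason shown in the stderr above), raised because the containerd runtime requires a CNI, so --cni=false must be refused before any cluster is created. A hedged sketch of asserting that contract (it assumes only that the rejection keeps exiting with code 14; this is not the net_test.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Expect minikube to refuse --cni=false with containerd before creating anything.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-271519",
			"--cni=false", "--driver=docker", "--container-runtime=containerd")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Println("got the expected MK_USAGE rejection (exit status 14)")
			return
		}
		fmt.Println("unexpected result:", err)
	}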

TestPause/serial/Pause (0.73s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-827783 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestPause/serial/VerifyStatus (0.36s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-827783 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-827783 --output=json --layout=cluster: exit status 2 (363.134718ms)

-- stdout --
	{"Name":"pause-827783","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-827783","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
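
Note: the --layout=cluster payload above uses HTTP-flavored status codes: 200 OK, 405 Stopped, 418 Paused. A short sketch that decodes a trimmed copy of that JSON and checks for the paused state (the struct shapes are inferred from this one payload, not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal shapes inferred from the payload above.
	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		payload := []byte(`{"Name":"pause-827783","StatusCode":418,"StatusName":"Paused",` +
			`"Nodes":[{"Name":"pause-827783","Components":{` +
			`"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},` +
			`"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
		var st clusterStatus
		if err := json.Unmarshal(payload, &st); err != nil {
			panic(err)
		}
		// 418 marks a paused cluster; the test itself keys off exit status 2 instead.
		fmt.Println(st.Name, "paused:", st.StatusCode == 418)
	}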

TestPause/serial/Unpause (0.72s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-827783 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (0.8s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-827783 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.73s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-827783 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-827783 --alsologtostderr -v=5: (2.732861517s)
--- PASS: TestPause/serial/DeletePaused (2.73s)

TestPause/serial/VerifyDeletedResources (0.49s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-827783
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-827783: exit status 1 (18.997471ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-827783: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)
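
Note: docker volume inspect on a removed volume prints [] on stdout and exits 1 with "no such volume" on stderr, which is exactly the outcome this test wants to see after delete -p. A sketch that turns that failure into a positive assertion (volume name reused from the log above; a hedged illustration, not the test's own helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// For a deleted volume, docker exits non-zero and says why on stderr.
		out, err := exec.Command("docker", "volume", "inspect", "pause-827783").CombinedOutput()
		if err != nil && strings.Contains(string(out), "no such volume") {
			fmt.Println("volume is gone, as expected after delete")
			return
		}
		fmt.Println("volume still present or unexpected error:", err)
	}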

TestStartStop/group/old-k8s-version/serial/FirstStart (48.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-865121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-865121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.907054791s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (48.91s)

TestStartStop/group/embed-certs/serial/FirstStart (41.89s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-670569 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-670569 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.888412823s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.89s)

TestStartStop/group/embed-certs/serial/DeployApp (9.23s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-670569 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [34019699-cd02-4fb3-b1ac-be69512ea195] Pending
helpers_test.go:352: "busybox" [34019699-cd02-4fb3-b1ac-be69512ea195] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [34019699-cd02-4fb3-b1ac-be69512ea195] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004139943s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-670569 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)
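
Note: the deploy step is a create-then-poll pattern: apply testdata/busybox.yaml, then wait until every pod labelled integration-test=busybox reports Running (here within about 9s of an 8m budget). A hedged sketch of the polling half via kubectl's jsonpath output (the loop bound and sleep are arbitrary choices, not the harness's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Poll the pod phases for the label the test waits on; stop once all report Running.
		for i := 0; i < 60; i++ {
			out, err := exec.Command("kubectl", "--context", "embed-certs-670569",
				"get", "pods", "-l", "integration-test=busybox",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				allRunning := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
					}
				}
				if allRunning {
					fmt.Println("busybox is Running")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for busybox")
	}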

TestStartStop/group/old-k8s-version/serial/DeployApp (9.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-865121 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d55b1d6d-a9be-4fef-a5ce-4df9eea6d988] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d55b1d6d-a9be-4fef-a5ce-4df9eea6d988] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002910757s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-865121 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-670569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-670569 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/embed-certs/serial/Stop (11.91s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-670569 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-670569 --alsologtostderr -v=3: (11.911757583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.91s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-865121 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-865121 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (11.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-865121 --alsologtostderr -v=3
E1002 06:48:12.517232  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-865121 --alsologtostderr -v=3: (11.893406618s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670569 -n embed-certs-670569
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670569 -n embed-certs-670569: exit status 7 (82.083531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-670569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
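
Note: "status error: exit status 7 (may be ok)" is deliberate. Per minikube's status documentation, the exit code encodes component states as bits (from right to left: host, cluster, kubernetes), so 7 from a fully stopped cluster is the expected value, and the addon can still be enabled against the stored profile config. A small sketch of reading that bitmask (hedged; it relies on the documented encoding rather than this test's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// minikube status encodes state bits in its exit code: 1 host, 2 cluster, 4 kubernetes.
		err := exec.Command("out/minikube-linux-amd64", "status", "-p", "embed-certs-670569").Run()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode()
		}
		fmt.Printf("host down: %v, cluster down: %v, kubernetes down: %v\n",
			code&1 != 0, code&2 != 0, code&4 != 0)
	}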

TestStartStop/group/embed-certs/serial/SecondStart (51.06s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-670569 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-670569 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.73448329s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-670569 -n embed-certs-670569
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-865121 -n old-k8s-version-865121
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-865121 -n old-k8s-version-865121: exit status 7 (75.18346ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-865121 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (44.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-865121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1002 06:48:32.044077  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/functional-199910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-865121 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.918028903s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-865121 -n old-k8s-version-865121
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9ccw2" [a56c33f0-cb8d-447e-b212-42b7db0ba59f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003257481s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ldwk7" [6233811e-b6e4-4788-a112-582aa0df7666] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003015326s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9ccw2" [a56c33f0-cb8d-447e-b212-42b7db0ba59f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003316849s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-865121 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ldwk7" [6233811e-b6e4-4788-a112-582aa0df7666] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003439377s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-670569 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-865121 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.71s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-865121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-865121 -n old-k8s-version-865121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-865121 -n old-k8s-version-865121: exit status 2 (304.395259ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-865121 -n old-k8s-version-865121
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-865121 -n old-k8s-version-865121: exit status 2 (316.366471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-865121 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-865121 -n old-k8s-version-865121
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-865121 -n old-k8s-version-865121
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.71s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-670569 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.85s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-670569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670569 -n embed-certs-670569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670569 -n embed-certs-670569: exit status 2 (314.590136ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670569 -n embed-certs-670569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670569 -n embed-certs-670569: exit status 2 (315.973297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-670569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-670569 -n embed-certs-670569
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-670569 -n embed-certs-670569
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.85s)

TestStartStop/group/no-preload/serial/FirstStart (50.33s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-065310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-065310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.329833865s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.33s)
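
A no-preload start skips minikube's preloaded image tarball, so every Kubernetes component image is pulled individually during bring-up, which is why this FirstStart runs longer than the preloaded profiles. A sketch of the same invocation, assuming a hypothetical profile "demo":

  minikube start -p demo --memory=3072 --preload=false \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1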

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-234329 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-234329 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.526993129s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.53s)
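
The default-k8s-diff-port profile differs from a stock start only in --apiserver-port=8444, which moves the API server off minikube's default port 8443. A sketch, again with a hypothetical "demo" profile:

  minikube start -p demo --apiserver-port=8444 --driver=docker --container-runtime=containerd
  kubectl --context demo cluster-info   # control-plane URL should now end in :8444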

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-234329 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8faed9b1-c8d4-4898-a34f-2d56339d572c] Pending
helpers_test.go:352: "busybox" [8faed9b1-c8d4-4898-a34f-2d56339d572c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8faed9b1-c8d4-4898-a34f-2d56339d572c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004525281s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-234329 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)
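
DeployApp applies the suite's busybox fixture, waits up to 8 minutes for a pod labelled integration-test=busybox to come up, then execs a trivial command in it as a smoke test. Roughly the same check by hand, assuming a manifest equivalent to the suite's testdata/busybox.yaml:

  kubectl --context default-k8s-diff-port-234329 create -f busybox.yaml
  kubectl --context default-k8s-diff-port-234329 wait pod -l integration-test=busybox \
    --for=condition=Ready --timeout=8m
  kubectl --context default-k8s-diff-port-234329 exec busybox -- /bin/sh -c "ulimit -n"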

TestStartStop/group/no-preload/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-065310 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a89d5b11-7753-40d5-b773-ae05320dc1bd] Pending
helpers_test.go:352: "busybox" [a89d5b11-7753-40d5-b773-ae05320dc1bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a89d5b11-7753-40d5-b773-ae05320dc1bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004460495s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-065310 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

TestStartStop/group/newest-cni/serial/FirstStart (26.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-227946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-227946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (26.798000998s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.80s)
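
The newest-cni profile exercises a CNI-only bring-up: --wait=apiserver,system_pods,default_sa narrows readiness gating to just those components, and --extra-config=kubeadm.pod-network-cidr hands kubeadm a custom pod CIDR. A sketch under the same flags, with a hypothetical profile:

  minikube start -p demo --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --wait=apiserver,system_pods,default_sa \
    --driver=docker --container-runtime=containerd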

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-234329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-234329 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-234329 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-234329 --alsologtostderr -v=3: (12.101776557s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

TestNetworkPlugins/group/auto/Start (42.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.717421559s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.72s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-065310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-065310 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-065310 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-065310 --alsologtostderr -v=3: (12.00685736s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329: exit status 7 (72.632958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-234329 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
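
EnableAddonAfterStop verifies that addon toggles persist while the cluster is down: status exits 7 with the host Stopped, yet the dashboard addon can still be enabled, to be deployed on the next start (covered by SecondStart below). A sketch, assuming profile "demo":

  minikube status --format='{{.Host}}' -p demo   # prints "Stopped"; exit status 7
  minikube addons enable dashboard -p demo       # recorded in the profile's config
  minikube start -p demo                         # the addon is deployed on restart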

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-234329 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-234329 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.581620538s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065310 -n no-preload-065310
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065310 -n no-preload-065310: exit status 7 (117.796822ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-065310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/SecondStart (55.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-065310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-065310 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.07062371s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065310 -n no-preload-065310
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-227946 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/newest-cni/serial/Stop (2.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-227946 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-227946 --alsologtostderr -v=3: (2.035272302s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.04s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227946 -n newest-cni-227946
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227946 -n newest-cni-227946: exit status 7 (137.022678ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-227946 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (11.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-227946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-227946 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (11.513611062s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-227946 -n newest-cni-227946
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-227946 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-227946 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227946 -n newest-cni-227946
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227946 -n newest-cni-227946: exit status 2 (348.324309ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227946 -n newest-cni-227946
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227946 -n newest-cni-227946: exit status 2 (329.22184ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-227946 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-227946 -n newest-cni-227946
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-227946 -n newest-cni-227946
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.90s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-271519 "pgrep -a kubelet"
I1002 06:51:00.327982  379278 config.go:182] Loaded profile config "auto-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x9kcq" [2dc44da1-23e5-4ecc-8076-7a8e2958ed9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x9kcq" [2dc44da1-23e5-4ecc-8076-7a8e2958ed9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.047151824s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)
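
Each NetCatPod test force-replaces the suite's netcat deployment and waits for its app=netcat pod to become Ready; that pod's dnsutils container is then the probe point for the DNS, Localhost, and HairPin checks. Roughly by hand, assuming a manifest equivalent to testdata/netcat-deployment.yaml:

  kubectl --context auto-271519 replace --force -f netcat-deployment.yaml
  kubectl --context auto-271519 wait pod -l app=netcat --for=condition=Ready --timeout=15m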

TestNetworkPlugins/group/kindnet/Start (39.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (39.062421094s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.06s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
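
DNS, Localhost, and HairPin are three connectivity probes run from inside the netcat pod: cluster DNS resolution, a loopback TCP connect, and the hairpin case where the pod reaches itself through its own Service name (assuming the deployment sits behind a Service named "netcat" on port 8080):

  kubectl --context auto-271519 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"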

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-slfkp" [ebedea20-9694-4ee2-96f2-3c7aad7515b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006058465s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-slfkp" [ebedea20-9694-4ee2-96f2-3c7aad7515b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003995994s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-234329 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)
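
UserAppExistsAfterStop and AddonExistsAfterStop both watch the k8s-app=kubernetes-dashboard label after the restart; the latter additionally describes the dashboard-metrics-scraper deployment configured earlier. A hand-run equivalent, assuming the dashboard addon is enabled:

  kubectl --context default-k8s-diff-port-234329 get pods -n kubernetes-dashboard \
    -l k8s-app=kubernetes-dashboard
  kubectl --context default-k8s-diff-port-234329 describe deploy/dashboard-metrics-scraper \
    -n kubernetes-dashboard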

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-234329 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-234329 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329: exit status 2 (329.662743ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329: exit status 2 (316.91488ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-234329 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-234329 -n default-k8s-diff-port-234329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.10s)

TestNetworkPlugins/group/calico/Start (43.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (43.870512521s)
--- PASS: TestNetworkPlugins/group/calico/Start (43.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.37s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b45f6" [169a8c01-52f9-4945-816f-216fae735aa8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.364772369s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.37s)

TestNetworkPlugins/group/custom-flannel/Start (54.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.390371694s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.39s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b45f6" [169a8c01-52f9-4945-816f-216fae735aa8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004294499s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-065310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7lz4t" [fb5e581a-5e9e-4b23-9416-e244ce5aa02f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003695516s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
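
The ControllerPod tests gate each CNI run on the plugin's own controller pod becoming healthy: app=kindnet in kube-system here, with the calico and flannel groups below using their own labels and namespaces. A quick spot-check by hand:

  kubectl --context kindnet-271519 get pods -n kube-system -l app=kindnet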

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-065310 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-065310 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065310 -n no-preload-065310
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065310 -n no-preload-065310: exit status 2 (311.152938ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065310 -n no-preload-065310
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065310 -n no-preload-065310: exit status 2 (336.707602ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-065310 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065310 -n no-preload-065310
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065310 -n no-preload-065310
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)
E1002 06:52:59.135577  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.142069  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.153622  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.175095  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.216885  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.298852  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.460951  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:52:59.782223  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:53:00.423764  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:53:01.705037  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 06:53:04.266619  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-271519 "pgrep -a kubelet"
I1002 06:51:48.853247  379278 config.go:182] Loaded profile config "kindnet-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dzjbr" [0cd0b40f-f6a5-41b4-88e6-c9605a7619b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dzjbr" [0cd0b40f-f6a5-41b4-88e6-c9605a7619b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004194245s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/Start (73.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m13.684039782s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.68s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gtbx4" [68bf444f-5653-4218-9d29-e11083866de1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00368321s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-271519 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/Start (46.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
I1002 06:52:19.328256  379278 config.go:182] Loaded profile config "calico-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (46.315866605s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dn2r6" [4b72aacf-7c8e-460d-9b58-1c902d7eb7eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dn2r6" [4b72aacf-7c8e-460d-9b58-1c902d7eb7eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003575075s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-271519 "pgrep -a kubelet"
I1002 06:52:29.330772  379278 config.go:182] Loaded profile config "custom-flannel-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w9t5h" [d07eedb4-86ff-4888-9df2-fab817905265] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w9t5h" [d07eedb4-86ff-4888-9df2-fab817905265] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004201314s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (35.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-271519 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (35.263348546s)
--- PASS: TestNetworkPlugins/group/bridge/Start (35.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-271519 "pgrep -a kubelet"
I1002 06:53:05.240460  379278 config.go:182] Loaded profile config "enable-default-cni-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s68xp" [a50f5ca9-0f10-456c-bdb3-3fd83839ac15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s68xp" [a50f5ca9-0f10-456c-bdb3-3fd83839ac15] Running
E1002 06:53:09.388344  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004007671s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nrjbz" [eeb6bd54-3acd-4563-b1fa-91625dacd100] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00297304s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
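
ControllerPod just waits for the flannel DaemonSet pod in the kube-flannel namespace to report Running; the equivalent one-off check from the shell (sketch):

  # Show the flannel controller pod the test waits on
  kubectl --context flannel-271519 -n kube-flannel get pods -l app=flannel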

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-271519 "pgrep -a kubelet"
I1002 06:53:11.720652  379278 config.go:182] Loaded profile config "flannel-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-56mq4" [4c0c5417-2f21-4939-b56a-8a6652ac55e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 06:53:12.517606  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/addons-304257/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-56mq4" [4c0c5417-2f21-4939-b56a-8a6652ac55e3] Running
E1002 06:53:19.629647  379278 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/old-k8s-version-865121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003348379s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)
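
DNS, Localhost, and HairPin are the same exec-based probe with three different targets; HairPin is the subtle one, since it asks the pod to reach its own service name (netcat:8080), which only succeeds when the CNI handles hairpin traffic. The three probes as run above:

  kubectl --context enable-default-cni-271519 exec deployment/netcat -- nslookup kubernetes.default                    # DNS
  kubectl --context enable-default-cni-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # Localhost
  kubectl --context enable-default-cni-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # HairPin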

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-271519 "pgrep -a kubelet"
I1002 06:53:25.022402  379278 config.go:182] Loaded profile config "bridge-271519": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-271519 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kvc2k" [cbf0a929-ae1b-41e6-861f-909405322fa6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kvc2k" [cbf0a929-ae1b-41e6-861f-909405322fa6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004460364s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-271519 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-271519 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

Test skip (25/331)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-575306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-575306
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
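
The cleanup shown above is the ordinary profile deletion path, which tears down the profile's container and removes its entry under the .minikube directory (sketch, stock minikube assumed):

  minikube delete -p disable-driver-mounts-575306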

TestNetworkPlugins/group/kubenet (3.33s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-271519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-271519

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-271519

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/hosts:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/resolv.conf:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-271519

>>> host: crictl pods:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: crictl containers:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> k8s: describe netcat deployment:
error: context "kubenet-271519" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-271519" does not exist

>>> k8s: netcat logs:
error: context "kubenet-271519" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-271519" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-271519" does not exist

>>> k8s: coredns logs:
error: context "kubenet-271519" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-271519" does not exist

>>> k8s: api server logs:
error: context "kubenet-271519" does not exist

>>> host: /etc/cni:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: ip a s:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: ip r s:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: iptables-save:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: iptables table nat:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-271519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-271519" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-271519" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: kubelet daemon config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> k8s: kubelet logs:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
extensions:
- extension:
last-update: Thu, 02 Oct 2025 06:45:37 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-125086
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
extensions:
- extension:
last-update: Thu, 02 Oct 2025 06:46:17 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.94.2:8443
name: pause-827783
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
extensions:
- extension:
last-update: Thu, 02 Oct 2025 06:46:22 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: running-upgrade-879201
contexts:
- context:
cluster: kubernetes-upgrade-125086
user: kubernetes-upgrade-125086
name: kubernetes-upgrade-125086
- context:
cluster: pause-827783
extensions:
- extension:
last-update: Thu, 02 Oct 2025 06:46:17 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: pause-827783
name: pause-827783
- context:
cluster: running-upgrade-879201
user: running-upgrade-879201
name: running-upgrade-879201
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-125086
user:
client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.crt
client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.key
- name: pause-827783
user:
client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.crt
client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.key
- name: running-upgrade-879201
user:
client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.crt
client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.key
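
(Note: current-context is empty and the kubeconfig carries no kubenet-271519 entry at all, which is why every kubectl probe above fails with "context was not found" — the debug helper runs before any cluster is created for this skipped test.)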

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-271519

>>> host: docker daemon status:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: docker daemon config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: docker system info:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: cri-docker daemon status:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: cri-docker daemon config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: cri-dockerd version:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: containerd daemon status:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: containerd daemon config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: containerd config dump:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: crio daemon status:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: crio daemon config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: /etc/crio:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

>>> host: crio config:
* Profile "kubenet-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-271519"

----------------------- debugLogs end: kubenet-271519 [took: 3.160864536s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-271519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-271519
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)

TestNetworkPlugins/group/cilium (3.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-271519 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-271519

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-271519

>>> host: /etc/nsswitch.conf:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/hosts:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/resolv.conf:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-271519

>>> host: crictl pods:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: crictl containers:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> k8s: describe netcat deployment:
error: context "cilium-271519" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-271519" does not exist

>>> k8s: netcat logs:
error: context "cilium-271519" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-271519" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-271519" does not exist

>>> k8s: coredns logs:
error: context "cilium-271519" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-271519" does not exist

>>> k8s: api server logs:
error: context "cilium-271519" does not exist

>>> host: /etc/cni:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: ip a s:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: ip r s:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: iptables-save:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: iptables table nat:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-271519

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-271519

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-271519" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-271519" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-271519

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-271519

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-271519" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-271519" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-271519" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-271519" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-271519" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: kubelet daemon config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> k8s: kubelet logs:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:45:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-125086
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-827783
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-375701/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-879201
contexts:
- context:
    cluster: kubernetes-upgrade-125086
    user: kubernetes-upgrade-125086
  name: kubernetes-upgrade-125086
- context:
    cluster: pause-827783
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-827783
  name: pause-827783
- context:
    cluster: running-upgrade-879201
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 06:46:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-879201
  name: running-upgrade-879201
current-context: running-upgrade-879201
kind: Config
users:
- name: kubernetes-upgrade-125086
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/kubernetes-upgrade-125086/client.key
- name: pause-827783
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/pause-827783/client.key
- name: running-upgrade-879201
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.crt
    client-key: /home/jenkins/minikube-integration/21643-375701/.minikube/profiles/running-upgrade-879201/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-271519

>>> host: docker daemon status:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: docker daemon config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: docker system info:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: cri-docker daemon status:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: cri-docker daemon config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: cri-dockerd version:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: containerd daemon status:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: containerd daemon config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: containerd config dump:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: crio daemon status:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: crio daemon config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: /etc/crio:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

>>> host: crio config:
* Profile "cilium-271519" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271519"

----------------------- debugLogs end: cilium-271519 [took: 3.747298712s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-271519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-271519
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)
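
Note: every probe in the debug dump above failed for the same reason: by the time logs were collected, no cilium-271519 context existed in the kubeconfig (which lists only kubernetes-upgrade-125086, pause-827783, and running-upgrade-879201) and no such minikube profile existed on the host. A minimal triage sketch for confirming this state locally, using only standard kubectl/minikube commands (the profile name is specific to this run and will differ elsewhere):

    # List the contexts the kubeconfig actually contains; cilium-271519
    # should be absent, matching the "context does not exist" errors above.
    kubectl config get-contexts

    # List minikube profiles on the host; the cilium-271519 profile is
    # deleted during test cleanup, matching the "Profile not found" messages.
    minikube profile list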