Test Report: Docker_Linux_containerd 21894

8496c1ca7722bf7d926446d0df8cf9af55d7419f:2025-11-15:42336

Test fail (8/332)

TestFunctional/parallel/DashboardCmd (302.22s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1]
E1115 09:29:54.694361  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:30:35.655882  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:31:57.577349  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-643455 --alsologtostderr -v=1] stderr:
I1115 09:29:45.044041  175074 out.go:360] Setting OutFile to fd 1 ...
I1115 09:29:45.044326  175074 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:45.044339  175074 out.go:374] Setting ErrFile to fd 2...
I1115 09:29:45.044345  175074 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:29:45.044541  175074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:29:45.044879  175074 mustload.go:66] Loading cluster: functional-643455
I1115 09:29:45.045307  175074 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:29:45.045705  175074 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:29:45.063631  175074 host.go:66] Checking if "functional-643455" exists ...
I1115 09:29:45.063905  175074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1115 09:29:45.125189  175074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:45.114902189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1115 09:29:45.125360  175074 api_server.go:166] Checking apiserver status ...
I1115 09:29:45.125419  175074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1115 09:29:45.125467  175074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:29:45.143415  175074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:29:45.242030  175074 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5089/cgroup
W1115 09:29:45.250492  175074 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5089/cgroup: Process exited with status 1
stdout:

stderr:
I1115 09:29:45.250537  175074 ssh_runner.go:195] Run: ls
I1115 09:29:45.254408  175074 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1115 09:29:45.258510  175074 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1115 09:29:45.258553  175074 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1115 09:29:45.258694  175074 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:29:45.258704  175074 addons.go:70] Setting dashboard=true in profile "functional-643455"
I1115 09:29:45.258710  175074 addons.go:239] Setting addon dashboard=true in "functional-643455"
I1115 09:29:45.258733  175074 host.go:66] Checking if "functional-643455" exists ...
I1115 09:29:45.259032  175074 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:29:45.279446  175074 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1115 09:29:45.280828  175074 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1115 09:29:45.282049  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1115 09:29:45.282089  175074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1115 09:29:45.282161  175074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:29:45.300459  175074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:29:45.401097  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1115 09:29:45.401128  175074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1115 09:29:45.414246  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1115 09:29:45.414275  175074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1115 09:29:45.427128  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1115 09:29:45.427159  175074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1115 09:29:45.440351  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1115 09:29:45.440371  175074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1115 09:29:45.453210  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1115 09:29:45.453239  175074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1115 09:29:45.466080  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1115 09:29:45.466104  175074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1115 09:29:45.478812  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1115 09:29:45.478832  175074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1115 09:29:45.491640  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1115 09:29:45.491661  175074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1115 09:29:45.504516  175074 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1115 09:29:45.504544  175074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1115 09:29:45.517511  175074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1115 09:29:45.971880  175074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-643455 addons enable metrics-server

I1115 09:29:45.973034  175074 addons.go:202] Writing out "functional-643455" config to set dashboard=true...
W1115 09:29:45.973296  175074 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1115 09:29:45.973916  175074 kapi.go:59] client config for functional-643455: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt", KeyFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.key", CAFile:"/home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1115 09:29:45.974398  175074 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1115 09:29:45.974414  175074 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1115 09:29:45.974425  175074 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1115 09:29:45.974431  175074 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1115 09:29:45.974437  175074 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1115 09:29:45.981744  175074 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  91378dff-53bd-4511-a040-05bfcf8186f1 791 0 2025-11-15 09:29:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-15 09:29:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.187.89,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.187.89],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1115 09:29:45.981948  175074 out.go:285] * Launching proxy ...
* Launching proxy ...
I1115 09:29:45.982025  175074 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-643455 proxy --port 36195]
I1115 09:29:45.982369  175074 dashboard.go:159] Waiting for kubectl to output host:port ...
I1115 09:29:46.024597  175074 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1115 09:29:46.024665  175074 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1115 09:29:46.032351  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce848d28-28d6-46ba-8206-b1ae8767e37d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d2f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028ab40 TLS:<nil>}
I1115 09:29:46.032460  175074 retry.go:31] will retry after 92.533µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.037655  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a961632-7f1b-4df1-ae1a-be818160e9ed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b77c0 TLS:<nil>}
I1115 09:29:46.037730  175074 retry.go:31] will retry after 124.982µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.041049  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5b748d0-fa30-430b-91f6-584421f84cc2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd040 TLS:<nil>}
I1115 09:29:46.041125  175074 retry.go:31] will retry after 153.888µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.044208  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5b45c509-219c-430e-992e-f4bedd51f585] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009676c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028af00 TLS:<nil>}
I1115 09:29:46.044266  175074 retry.go:31] will retry after 268.869µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.047495  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b289d8e-cf4a-4cb1-a370-000b61fd85b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b040 TLS:<nil>}
I1115 09:29:46.047536  175074 retry.go:31] will retry after 289.853µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.050643  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d81701e5-5345-4a01-9ee2-a6697ea33e1e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009677c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7900 TLS:<nil>}
I1115 09:29:46.050681  175074 retry.go:31] will retry after 803.679µs: Temporary Error: unexpected response code: 503
I1115 09:29:46.053731  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b13f12a-c48d-487c-9a63-5fbfaa1bdf89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b180 TLS:<nil>}
I1115 09:29:46.053786  175074 retry.go:31] will retry after 1.164048ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.057935  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd6c1068-1a08-451e-92c3-c3ecf1c49982] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009678c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd180 TLS:<nil>}
I1115 09:29:46.057978  175074 retry.go:31] will retry after 1.188976ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.062172  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7dbb167-34f1-4c84-aa73-eb19fce99ce3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b2c0 TLS:<nil>}
I1115 09:29:46.062250  175074 retry.go:31] will retry after 1.458312ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.066605  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7069d45-a177-4a7d-9145-a8a414282969] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0009679c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7cc0 TLS:<nil>}
I1115 09:29:46.066654  175074 retry.go:31] will retry after 2.60561ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.072109  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2da1e842-521d-4637-9aff-22299e1101e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b400 TLS:<nil>}
I1115 09:29:46.072166  175074 retry.go:31] will retry after 4.109761ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.079553  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61ab0f81-51cc-4f67-98c4-027954e76030] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000aa9f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008b7e00 TLS:<nil>}
I1115 09:29:46.079595  175074 retry.go:31] will retry after 8.805052ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.091119  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cbb9cd51-3faa-44d8-a0bc-3e77c2ad6c2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd2c0 TLS:<nil>}
I1115 09:29:46.091187  175074 retry.go:31] will retry after 14.222834ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.109406  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00f2fa93-0374-477f-a44c-e7b407d4c47c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b540 TLS:<nil>}
I1115 09:29:46.109470  175074 retry.go:31] will retry after 20.00421ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.132358  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17288325-10de-43b5-9db2-f2d37e7983d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b680 TLS:<nil>}
I1115 09:29:46.132430  175074 retry.go:31] will retry after 38.600885ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.174566  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4d52d14-7c49-4785-b634-4b2a50eabe13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9680 TLS:<nil>}
I1115 09:29:46.174662  175074 retry.go:31] will retry after 64.48062ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.242945  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9615dd49-b746-4e20-8534-959a108682ef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b7c0 TLS:<nil>}
I1115 09:29:46.243012  175074 retry.go:31] will retry after 79.278043ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.326564  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17f3e7f2-ab4c-4c60-b71f-3c5dd76bda1a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0008c2100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e97c0 TLS:<nil>}
I1115 09:29:46.326654  175074 retry.go:31] will retry after 146.9789ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.477076  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[76f87f82-2b6a-47ea-8ebe-56d3f1236c15] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0008c2180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd400 TLS:<nil>}
I1115 09:29:46.477142  175074 retry.go:31] will retry after 111.988301ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.593698  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f29c714a-ab2e-4015-b025-1b3a6e09cf96] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc0007d3580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd540 TLS:<nil>}
I1115 09:29:46.593765  175074 retry.go:31] will retry after 302.242022ms: Temporary Error: unexpected response code: 503
I1115 09:29:46.899413  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e57060b-765b-4a2b-889e-57fad33862d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:46 GMT]] Body:0xc000967d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9900 TLS:<nil>}
I1115 09:29:46.899480  175074 retry.go:31] will retry after 256.959249ms: Temporary Error: unexpected response code: 503
I1115 09:29:47.160022  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c58d409-9e7a-41d0-be1c-671dce00cf9c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:47 GMT]] Body:0xc0007d3680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028b900 TLS:<nil>}
I1115 09:29:47.160114  175074 retry.go:31] will retry after 350.125562ms: Temporary Error: unexpected response code: 503
I1115 09:29:47.513744  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b356abf3-bfc8-4503-8f90-eea4fffacc09] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:47 GMT]] Body:0xc0007d3740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9a40 TLS:<nil>}
I1115 09:29:47.513819  175074 retry.go:31] will retry after 1.085866206s: Temporary Error: unexpected response code: 503
I1115 09:29:48.602962  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[084129e7-57d8-4c61-a9c4-217a9c98b6f1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:48 GMT]] Body:0xc0008c2280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9b80 TLS:<nil>}
I1115 09:29:48.603028  175074 retry.go:31] will retry after 1.263396499s: Temporary Error: unexpected response code: 503
I1115 09:29:49.870521  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[edd3a503-2e07-45b7-9b63-e56d0b996c42] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:49 GMT]] Body:0xc0007d3840 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd680 TLS:<nil>}
I1115 09:29:49.870604  175074 retry.go:31] will retry after 1.05300319s: Temporary Error: unexpected response code: 503
I1115 09:29:50.926985  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7dd78533-31b4-4465-a690-50de1b318898] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:50 GMT]] Body:0xc000967e80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004e9cc0 TLS:<nil>}
I1115 09:29:50.927049  175074 retry.go:31] will retry after 3.310876895s: Temporary Error: unexpected response code: 503
I1115 09:29:54.243630  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0650ec5b-e5b4-4e83-8740-b628a6e3aa4b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:54 GMT]] Body:0xc0007d3940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028bb80 TLS:<nil>}
I1115 09:29:54.243697  175074 retry.go:31] will retry after 2.501745474s: Temporary Error: unexpected response code: 503
I1115 09:29:56.749134  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd609a6b-6cb0-4718-ad4d-703a8cf13660] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:29:56 GMT]] Body:0xc0008c2380 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00028bcc0 TLS:<nil>}
I1115 09:29:56.749196  175074 retry.go:31] will retry after 5.182712673s: Temporary Error: unexpected response code: 503
I1115 09:30:01.936734  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[68ffae2a-b995-4735-8830-014b4cc33c0a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:01 GMT]] Body:0xc0008fe980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd7c0 TLS:<nil>}
I1115 09:30:01.936806  175074 retry.go:31] will retry after 10.79189693s: Temporary Error: unexpected response code: 503
I1115 09:30:12.733127  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[102f7269-032b-42cf-b7b9-bb0467e2e9dc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:12 GMT]] Body:0xc0008c2480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0000 TLS:<nil>}
I1115 09:30:12.733204  175074 retry.go:31] will retry after 15.965097719s: Temporary Error: unexpected response code: 503
I1115 09:30:28.704391  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b47b2cd-8591-4fa8-b088-629956bafbd0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:28 GMT]] Body:0xc0008c2500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0140 TLS:<nil>}
I1115 09:30:28.704467  175074 retry.go:31] will retry after 17.465830656s: Temporary Error: unexpected response code: 503
I1115 09:30:46.173985  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[87340e33-43d6-41ca-b133-755dd72e2ac3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:30:46 GMT]] Body:0xc0007d3b00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd900 TLS:<nil>}
I1115 09:30:46.174070  175074 retry.go:31] will retry after 36.160197642s: Temporary Error: unexpected response code: 503
I1115 09:31:22.338624  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04904562-2cbd-42ea-bd95-454fc19beec4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:31:22 GMT]] Body:0xc0007d3bc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0280 TLS:<nil>}
I1115 09:31:22.338693  175074 retry.go:31] will retry after 59.29160483s: Temporary Error: unexpected response code: 503
I1115 09:32:21.634169  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9479e8c6-2a13-406d-a3e2-7f88c69e05fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:32:21 GMT]] Body:0xc0008c20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a03c0 TLS:<nil>}
I1115 09:32:21.634248  175074 retry.go:31] will retry after 1m8.35287164s: Temporary Error: unexpected response code: 503
I1115 09:33:29.993756  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eab7ddfc-354e-45ab-9f36-878aa038a653] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:33:29 GMT]] Body:0xc0007d2ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0500 TLS:<nil>}
I1115 09:33:29.993826  175074 retry.go:31] will retry after 41.862939328s: Temporary Error: unexpected response code: 503
I1115 09:34:11.860353  175074 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40097749-5499-46b5-bf11-e6683d7b0e15] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 15 Nov 2025 09:34:11 GMT]] Body:0xc0008c20c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a0640 TLS:<nil>}
I1115 09:34:11.860433  175074 retry.go:31] will retry after 1m21.415223078s: Temporary Error: unexpected response code: 503
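The retry lines above show the shape of the failure: minikube's proxy health check GETs the dashboard's proxy URL, receives 503 every time (the dashboard pod never became ready), and backs off with roughly doubling delays until the test's ~5 minute budget is exhausted, which is why `dashboard --url` never printed a URL. A minimal sketch of that poll-with-backoff pattern, assuming a plain doubling delay (the real retry.go uses jittered intervals) and a hypothetical `checkProxy` helper rather than minikube's actual dashboard.go code:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkProxy polls url until it answers 200 OK or the budget expires,
// sleeping a roughly doubling interval between attempts, as the retry.go
// lines in the log above do. Hypothetical helper, not minikube code.
func checkProxy(url string, budget time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving
			}
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("service not healthy after %s", budget)
}

func main() {
	// Proxy URL taken verbatim from the dashboard.go lines above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := checkProxy(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```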
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-643455
helpers_test.go:243: (dbg) docker inspect functional-643455:

-- stdout --
	[
	    {
	        "Id": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
	        "Created": "2025-11-15T09:27:38.460289529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:27:38.49284776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hosts",
	        "LogPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f-json.log",
	        "Name": "/functional-643455",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-643455:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-643455",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
	                "LowerDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2-init/diff:/var/lib/docker/overlay2/dd55a3984a0401bbe9c47729dc0fec07395bf4daab8d10377766fb7a6cf0f6d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-643455",
	                "Source": "/var/lib/docker/volumes/functional-643455/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-643455",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-643455",
	                "name.minikube.sigs.k8s.io": "functional-643455",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "029c034dcecc64b8ccca91cb8f52a0ca277442aca7cd6409ecdd0fb513d4f17f",
	            "SandboxKey": "/var/run/docker/netns/029c034dcecc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-643455": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be24d09662bb1f50ee771e52c11387b4f471476e50e89b32b3a29bd33fc96223",
	                    "EndpointID": "2c39da97daa59f3d6450a6acb87027688136e17fb9118a11649286155d98bd18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:72:d6:b1:0e:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-643455",
	                        "75d4c555182e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
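For cross-checking the inspect output against the stderr log: the `cli_runner` lines earlier extract the SSH host port with a `docker container inspect` Go template over `NetworkSettings.Ports`, the same map shown above (22/tcp mapped to 127.0.0.1:32783). A small sketch of that lookup, assuming the docker CLI is on PATH; `hostPort` is a hypothetical helper, not a minikube function:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker mapped to the container's 22/tcp,
// using the same inspect template the cli_runner log lines run.
func hostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("functional-643455")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "32783" per the inspect output above
}
```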
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-643455 -n functional-643455
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 logs -n 25: (1.247713372s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-643455 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh -- ls -la /mount-9p                                                                                         │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh cat /mount-9p/test-1763198973779415877                                                                      │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh stat /mount-9p/created-by-test                                                                              │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh stat /mount-9p/created-by-pod                                                                               │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh sudo umount -f /mount-9p                                                                                    │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount     │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdspecific-port3528019422/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ ssh       │ functional-643455 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh -- ls -la /mount-9p                                                                                         │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh sudo umount -f /mount-9p                                                                                    │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount     │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount3 --alsologtostderr -v=1                 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount     │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount2 --alsologtostderr -v=1                 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ ssh       │ functional-643455 ssh findmnt -T /mount1                                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount     │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount1 --alsologtostderr -v=1                 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ ssh       │ functional-643455 ssh findmnt -T /mount1                                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh findmnt -T /mount2                                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh       │ functional-643455 ssh findmnt -T /mount3                                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ mount     │ -p functional-643455 --kill=true                                                                                                  │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ addons    │ functional-643455 addons list                                                                                                     │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ addons    │ functional-643455 addons list -o json                                                                                             │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ start     │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ start     │ -p functional-643455 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                             │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ start     │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                   │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-643455 --alsologtostderr -v=1                                                                    │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:29:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:29:44.875838  174988 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:29:44.875936  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.875944  174988 out.go:374] Setting ErrFile to fd 2...
	I1115 09:29:44.875957  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.876325  174988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:29:44.876748  174988 out.go:368] Setting JSON to false
	I1115 09:29:44.877812  174988 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15135,"bootTime":1763183850,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:29:44.877921  174988 start.go:143] virtualization: kvm guest
	I1115 09:29:44.880159  174988 out.go:179] * [functional-643455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:29:44.881646  174988 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:29:44.881678  174988 notify.go:221] Checking for updates...
	I1115 09:29:44.884009  174988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:29:44.885173  174988 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:29:44.886339  174988 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:29:44.887594  174988 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:29:44.888818  174988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:29:44.890443  174988 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:29:44.890911  174988 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:29:44.915414  174988 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:29:44.915506  174988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:29:44.974874  174988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:44.965206839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:29:44.974982  174988 docker.go:319] overlay module found
	I1115 09:29:44.976788  174988 out.go:179] * Using the docker driver based on existing profile
	I1115 09:29:44.978139  174988 start.go:309] selected driver: docker
	I1115 09:29:44.978155  174988 start.go:930] validating driver "docker" against &{Name:functional-643455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-643455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:29:44.978254  174988 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:29:44.980009  174988 out.go:203] 
	W1115 09:29:44.981297  174988 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:29:44.982533  174988 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2cda70e5609c4       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   b835aa6cdf85b       busybox-mount                               default
	17a282b1fa4c9       5107333e08a87       5 minutes ago       Running             mysql                     0                   bb3d9dcf11838       mysql-5bb876957f-5bd4x                      default
	bf75b4fead77d       9056ab77afb8e       5 minutes ago       Running             echo-server               0                   0a593cd9578a6       hello-node-connect-7d85dfc575-q2qtv         default
	564d4fabc270f       6e38f40d628db       5 minutes ago       Running             storage-provisioner       2                   7280b209c4a1e       storage-provisioner                         kube-system
	59b2e611066bb       c80c8dbafe7dd       5 minutes ago       Running             kube-controller-manager   2                   488ecac322c4f       kube-controller-manager-functional-643455   kube-system
	babc27772525c       c3994bc696102       5 minutes ago       Running             kube-apiserver            0                   26840f129c94e       kube-apiserver-functional-643455            kube-system
	20e7221441e30       5f1f5298c888d       5 minutes ago       Running             etcd                      1                   f97b5bca4f6a7       etcd-functional-643455                      kube-system
	fb18f66b9833e       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   bfebc994070f2       kindnet-9ck6k                               kube-system
	0eea2114b1571       fc25172553d79       6 minutes ago       Running             kube-proxy                1                   6075525d36525       kube-proxy-nwjjp                            kube-system
	cc9efc6fc9059       c80c8dbafe7dd       6 minutes ago       Exited              kube-controller-manager   1                   488ecac322c4f       kube-controller-manager-functional-643455   kube-system
	4e1710787e24e       7dd6aaa1717ab       6 minutes ago       Running             kube-scheduler            1                   558179c3009ad       kube-scheduler-functional-643455            kube-system
	dd1dfa9b2e913       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   7280b209c4a1e       storage-provisioner                         kube-system
	ec35552550ecd       52546a367cc9e       6 minutes ago       Running             coredns                   1                   d3841618a2a6f       coredns-66bc5c9577-gslgg                    kube-system
	71224bce65213       52546a367cc9e       6 minutes ago       Exited              coredns                   0                   d3841618a2a6f       coredns-66bc5c9577-gslgg                    kube-system
	4c8deb830b3c4       409467f978b4a       6 minutes ago       Exited              kindnet-cni               0                   bfebc994070f2       kindnet-9ck6k                               kube-system
	12df49b0bdbf1       fc25172553d79       6 minutes ago       Exited              kube-proxy                0                   6075525d36525       kube-proxy-nwjjp                            kube-system
	81fd6e38ed44f       7dd6aaa1717ab       6 minutes ago       Exited              kube-scheduler            0                   558179c3009ad       kube-scheduler-functional-643455            kube-system
	dc71f2be2fc35       5f1f5298c888d       6 minutes ago       Exited              etcd                      0                   f97b5bca4f6a7       etcd-functional-643455                      kube-system
	
	
	==> containerd <==
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.122746937Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.124921553Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.197821548Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.276792155Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.276866159Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.277552401Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.278929652Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.334329706Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.417802844Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:32:27 functional-643455 containerd[3858]: time="2025-11-15T09:32:27.417909871Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.122293366Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.123962933Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.181545281Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.263701819Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:32:31 functional-643455 containerd[3858]: time="2025-11-15T09:32:31.263788620Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11044"
	Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.121440311Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.123331179Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.208225182Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.292035532Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:32:34 functional-643455 containerd[3858]: time="2025-11-15T09:32:34.292089178Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10998"
	Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.121956160Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.123910312Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.205297259Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.287918113Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:32:40 functional-643455 containerd[3858]: time="2025-11-15T09:32:40.287975556Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	
	
	==> coredns [71224bce65213776a3058b9d9b685001f8515f08b6b57cb996061ae7af3d144b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42898 - 36145 "HINFO IN 8984940392331241906.8485209873416469064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045622994s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec35552550ecdcc3b355ec8adfa48f77638c99f67e22e267a5a5312cda6d6e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38627 - 823 "HINFO IN 155197805458491775.8394333180523951329. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.088504316s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-643455
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-643455
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=functional-643455
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:27:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-643455
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:34:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:33:25 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:33:25 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:33:25 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:33:25 +0000   Sat, 15 Nov 2025 09:28:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-643455
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9f8a9454-d3ff-4e20-a36e-cf2efe1bcbc9
	  Boot ID:                    fbc9987d-de80-43b3-8f69-13458401c4dd
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sx2nl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  default                     hello-node-connect-7d85dfc575-q2qtv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     mysql-5bb876957f-5bd4x                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m32s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 coredns-66bc5c9577-gslgg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m47s
	  kube-system                 etcd-functional-643455                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m53s
	  kube-system                 kindnet-9ck6k                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m47s
	  kube-system                 kube-apiserver-functional-643455              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-functional-643455     200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-proxy-nwjjp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-scheduler-functional-643455              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-gcsp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gq4vv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m46s                  kube-proxy       
	  Normal  Starting                 5m50s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m53s                  kubelet          Node functional-643455 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m53s                  kubelet          Node functional-643455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m53s                  kubelet          Node functional-643455 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m48s                  node-controller  Node functional-643455 event: Registered Node functional-643455 in Controller
	  Normal  NodeReady                6m36s                  kubelet          Node functional-643455 status is now: NodeReady
	  Normal  Starting                 5m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet          Node functional-643455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet          Node functional-643455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet          Node functional-643455 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m55s                  node-controller  Node functional-643455 event: Registered Node functional-643455 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [20e7221441e30ddff73a233e0fb39c7859b8cdcf308f699b3da6d4ea14757f97] <==
	{"level":"warn","ts":"2025-11-15T09:28:48.547207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.555891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.561820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.568579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.574820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.580802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.587893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.593831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.601012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.617260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.623700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.629882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.637301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.643481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.649905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.656858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.663547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.669659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.676825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.682971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.689137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.707604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.714647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.721973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.772471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50872","server-name":"","error":"EOF"}
	
	
	==> etcd [dc71f2be2fc35f940d08e52670de1d4a1226f5ed51724f2c27632ec3469c374d] <==
	{"level":"warn","ts":"2025-11-15T09:27:50.760606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.767815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.773822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.794096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.801683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.810153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.859079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35488","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:28:45.349803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T09:28:45.349978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T09:28:45.350141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:28:45.351775Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:28:45.353125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.353180Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353221Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-15T09:28:45.353283Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-15T09:28:45.353293Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353298Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:28:45.353318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353243Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353342Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:28:45.353354Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.355660Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T09:28:45.355740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.355771Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T09:28:45.355789Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:34:46 up  4:17,  0 user,  load average: 0.12, 0.74, 1.66
	Linux functional-643455 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8deb830b3c40c4c2e7460472b20a8a34868b3c6ed2a2b28e8e2eb708d19b1e] <==
	I1115 09:28:00.290015       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:28:00.290318       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 09:28:00.290463       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:28:00.290482       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:28:00.290506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:28:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:28:00.492641       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:28:00.492730       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:28:00.492745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:28:00.493004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:28:00.885572       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:28:00.885610       1 metrics.go:72] Registering metrics
	I1115 09:28:00.885711       1 controller.go:711] "Syncing nftables rules"
	I1115 09:28:10.494295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:10.494381       1 main.go:301] handling current node
	I1115 09:28:20.498719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:20.498760       1 main.go:301] handling current node
	I1115 09:28:30.497908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:30.497939       1 main.go:301] handling current node
	
	
	==> kindnet [fb18f66b9833e3dde538053ed3f57dd6dfdb05cb5a04a8703272118f19fe0bd1] <==
	I1115 09:32:46.191198       1 main.go:301] handling current node
	I1115 09:32:56.191768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:32:56.191799       1 main.go:301] handling current node
	I1115 09:33:06.194092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:06.194129       1 main.go:301] handling current node
	I1115 09:33:16.194770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:16.194812       1 main.go:301] handling current node
	I1115 09:33:26.191188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:26.191228       1 main.go:301] handling current node
	I1115 09:33:36.195146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:36.195182       1 main.go:301] handling current node
	I1115 09:33:46.199914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:46.199951       1 main.go:301] handling current node
	I1115 09:33:56.191660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:56.191707       1 main.go:301] handling current node
	I1115 09:34:06.199686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:06.199727       1 main.go:301] handling current node
	I1115 09:34:16.198199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:16.198236       1 main.go:301] handling current node
	I1115 09:34:26.191572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:26.191604       1 main.go:301] handling current node
	I1115 09:34:36.193216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:36.193255       1 main.go:301] handling current node
	I1115 09:34:46.197401       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:46.197436       1 main.go:301] handling current node
	
	
	==> kube-apiserver [babc27772525c961baf898d5c14615a30cff6db31c2bcaed456c0b27dbbaeeb8] <==
	I1115 09:28:49.239447       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:28:49.262732       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:28:50.141810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:28:50.268508       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1115 09:28:50.443490       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 09:28:50.444740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:28:50.449478       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:28:50.987214       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:28:51.077837       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:28:51.125716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:28:51.137090       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:28:56.155591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:29:08.941026       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.120.65"}
	I1115 09:29:13.445263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.91.133"}
	I1115 09:29:14.778145       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.53.42"}
	I1115 09:29:20.827479       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.14.196"}
	I1115 09:29:22.112518       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.127.55"}
	E1115 09:29:28.953220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39926: use of closed network connection
	E1115 09:29:30.359901       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39944: use of closed network connection
	E1115 09:29:32.570122       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39958: use of closed network connection
	I1115 09:29:45.819201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:29:45.953077       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.187.89"}
	I1115 09:29:45.964682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.163.26"}
	
	
	==> kube-controller-manager [59b2e611066bb26cf54b4e22ea3bff8df16074d96a0d58f8bca35318b1d8397e] <==
	I1115 09:28:51.985933       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 09:28:51.985979       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:28:51.986098       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:28:51.986413       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 09:28:51.986426       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:28:51.986415       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:28:51.986509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 09:28:51.987007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:28:51.987074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:28:51.987524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:28:51.988575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:28:51.991276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:28:51.994197       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 09:28:51.994259       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 09:28:51.994301       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 09:28:51.994309       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 09:28:51.994315       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:28:52.002224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:28:52.012283       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:29:45.867793       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.871537       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.874791       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.876115       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.878691       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.884312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cc9efc6fc9059c7ecb39bcd62cb964c9b28d22237804da685ab2e20045fee203] <==
	I1115 09:28:36.434971       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:28:37.179644       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 09:28:37.179668       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:37.181112       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 09:28:37.181116       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1115 09:28:37.181432       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 09:28:37.181459       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 09:28:47.182963       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [0eea2114b1571ce0a888ea43435cf1aaf3f9357fdb10b1195e8c51c681f176e2] <==
	I1115 09:28:35.926903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 09:28:35.927913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:36.929125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:38.805776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:44.306372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 09:28:55.327547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:28:55.327588       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:28:55.327664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:28:55.349670       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:28:55.349735       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:28:55.355434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:28:55.355930       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:28:55.355962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:55.358227       1 config.go:200] "Starting service config controller"
	I1115 09:28:55.358308       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:28:55.358373       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:28:55.358380       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:28:55.358255       1 config.go:309] "Starting node config controller"
	I1115 09:28:55.358404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:28:55.358411       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:28:55.358694       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:28:55.358709       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:28:55.458500       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:28:55.460010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:28:55.460039       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [12df49b0bdbf13f0052ec752866e2308cdebef7eb02aa3c3f90bad04188baeb6] <==
	I1115 09:27:59.895569       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:27:59.967964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:28:00.068553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:28:00.068616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:28:00.068733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:28:00.089484       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:28:00.089545       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:28:00.094834       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:28:00.095367       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:28:00.095411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:00.097116       1 config.go:200] "Starting service config controller"
	I1115 09:28:00.097147       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:28:00.097187       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:28:00.097193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:28:00.097216       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:28:00.097304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:28:00.097364       1 config.go:309] "Starting node config controller"
	I1115 09:28:00.097375       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:28:00.097382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:28:00.197347       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:28:00.197347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:28:00.197618       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4e1710787e24ed725efd1baf8185da908cde84b9609641698e1063153aac9e5e] <==
	E1115 09:28:41.170473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:28:41.178915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:28:41.351173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:28:41.407919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:28:41.450528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:28:43.572781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:28:44.247390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:28:44.491689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:28:44.567545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:28:44.840315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:28:44.890860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:28:44.891434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:28:45.313304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:28:45.522511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:28:45.726307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:28:45.758983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:28:46.185391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:28:46.200407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:28:46.213130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:28:46.246730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:28:46.326995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:28:46.369849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:46.787615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:28:47.489187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1115 09:28:57.693703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [81fd6e38ed44f24f83e30b9f760f68608a59e45ddfa53d48e689f61dc83a06fb] <==
	E1115 09:27:51.281612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:27:51.281671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:27:51.281741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:27:51.281807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:27:51.281864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:27:51.285219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:27:51.285423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:27:52.086790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:27:52.166397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:27:52.204946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:27:52.217198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:27:52.244369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:27:52.247394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:27:52.316405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:27:52.372527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:27:52.468910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:27:52.492229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:27:52.632513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 09:27:55.375996       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:28:35.165221       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 09:28:35.165251       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:28:35.165269       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 09:28:35.165363       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 09:28:35.165459       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 09:28:35.165486       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 15 09:33:41 functional-643455 kubelet[4901]: E1115 09:33:41.121594    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:33:44 functional-643455 kubelet[4901]: E1115 09:33:44.122349    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:33:44 functional-643455 kubelet[4901]: E1115 09:33:44.122361    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:33:47 functional-643455 kubelet[4901]: E1115 09:33:47.121557    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:33:49 functional-643455 kubelet[4901]: E1115 09:33:49.122353    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:33:54 functional-643455 kubelet[4901]: E1115 09:33:54.121339    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:33:55 functional-643455 kubelet[4901]: E1115 09:33:55.121941    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:33:58 functional-643455 kubelet[4901]: E1115 09:33:58.121810    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:34:00 functional-643455 kubelet[4901]: E1115 09:34:00.122380    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:34:01 functional-643455 kubelet[4901]: E1115 09:34:01.121726    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:34:08 functional-643455 kubelet[4901]: E1115 09:34:08.121426    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:34:08 functional-643455 kubelet[4901]: E1115 09:34:08.122242    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:34:09 functional-643455 kubelet[4901]: E1115 09:34:09.122462    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:34:14 functional-643455 kubelet[4901]: E1115 09:34:14.122120    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:34:16 functional-643455 kubelet[4901]: E1115 09:34:16.121310    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:34:21 functional-643455 kubelet[4901]: E1115 09:34:21.121321    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:34:21 functional-643455 kubelet[4901]: E1115 09:34:21.122179    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:34:22 functional-643455 kubelet[4901]: E1115 09:34:22.121839    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:34:28 functional-643455 kubelet[4901]: E1115 09:34:28.122144    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:34:29 functional-643455 kubelet[4901]: E1115 09:34:29.121233    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:34:33 functional-643455 kubelet[4901]: E1115 09:34:33.122170    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:34:35 functional-643455 kubelet[4901]: E1115 09:34:35.121782    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:34:36 functional-643455 kubelet[4901]: E1115 09:34:36.121688    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:34:43 functional-643455 kubelet[4901]: E1115 09:34:43.121809    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:34:44 functional-643455 kubelet[4901]: E1115 09:34:44.121669    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	
	
	==> storage-provisioner [564d4fabc270f8233361e6322badd95ab1ccf27337c2f9b7a77f6c63013f1f9b] <==
	W1115 09:34:21.128084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:23.131312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:23.135196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:25.137969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:25.142859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:27.146402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:27.150000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:29.152788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:29.156340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:31.159838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:31.163908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:33.167349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:33.171763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:35.175312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:35.178838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:37.182268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:37.187098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:39.190249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:39.194144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:41.196751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:41.200437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:43.203886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:43.207792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:45.211264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:34:45.216613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dd1dfa9b2e913da162f5d62e0505a76b211080b8cab95935e800b19c395cad29] <==
	I1115 09:28:35.787220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 09:28:35.792470       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
helpers_test.go:269: (dbg) Run:  kubectl --context functional-643455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1 (107.183694ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:35 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://2cda70e5609c48560b543ec240ea7b5b6dfeb79dc264b8dd459d7bba2c5947ff
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 15 Nov 2025 09:29:37 +0000
	      Finished:     Sat, 15 Nov 2025 09:29:37 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgv2q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-sgv2q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-643455
	  Normal  Pulling    5m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.562s (1.562s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sx2nl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5rgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5rgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m25s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sx2nl to functional-643455
	  Normal   Pulling    2m13s (x5 over 5m25s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m13s (x5 over 5m24s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m13s (x5 over 5m24s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x20 over 5m24s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x20 over 5m24s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:20 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrftf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrftf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m27s                  default-scheduler  Successfully assigned default/nginx-svc to functional-643455
	  Warning  Failed     5m25s                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m20s (x5 over 5m26s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m20s (x5 over 5m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m20s (x4 over 5m12s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x21 over 5m24s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x21 over 5m24s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5kkb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p5kkb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m25s                  default-scheduler  Successfully assigned default/sp-pod to functional-643455
	  Normal   Pulling    2m20s (x5 over 5m24s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m20s (x5 over 5m24s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m20s (x5 over 5m24s)  kubelet            Error: ErrImagePull
	  Warning  Failed     18s (x20 over 5m23s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x21 over 5m23s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-gcsp4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gq4vv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.22s)
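Every pull in the events above died with HTTP 429 from registry-1.docker.io: the CI host exhausted Docker Hub's anonymous pull allowance, so neither the dashboard images nor the workload images ever arrived and the dashboard command never printed a URL. Docker Hub reports the remaining anonymous quota via ratelimit-limit / ratelimit-remaining headers on a manifest HEAD request made with a free token (see https://docs.docker.com/docker-hub/download-rate-limit/). Below is a minimal Go sketch of that check, useful for confirming the quota from the affected host; it is a diagnostic aid only, not part of the minikube test suite.

// ratelimit_check.go - minimal sketch of the rate-limit probe documented at
// https://docs.docker.com/docker-hub/download-rate-limit/ ; diagnostic only.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Anonymous bearer token scoped to Docker's dedicated probe repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; per Docker's docs this variant is not counted
	//    against the quota but still returns the ratelimit headers (values
	//    look like "100;w=21600", i.e. 100 pulls per 21600-second window).
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("status:   ", res.Status)
	fmt.Println("limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("remaining:", res.Header.Get("ratelimit-remaining"))
}

A remaining count of 0 here would confirm the quota exhaustion behind the ErrImagePull storm above.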

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (367.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [84999da4-a54e-4aad-a769-67721e764651] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004027148s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-643455 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-643455 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-643455 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-643455 apply -f testdata/storage-provisioner/pod.yaml
I1115 09:29:22.762086  128258 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2393f4a7-5ffe-4821-99e3-ea6552a163f7] Pending
helpers_test.go:352: "sp-pod" [2393f4a7-5ffe-4821-99e3-ea6552a163f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1115 09:29:23.970394  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-15 09:35:23.084564395 +0000 UTC m=+782.385465563
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-643455 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-643455 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-643455/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5kkb (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-p5kkb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-643455
  Normal   Pulling    2m56s (x5 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     2m56s (x5 over 6m)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m56s (x5 over 6m)    kubelet            Error: ErrImagePull
  Warning  Failed     54s (x20 over 5m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    39s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-643455 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-643455 logs sp-pod -n default: exit status 1 (69.847241ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-643455 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
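The wait above is a straight label-selector poll: list pods matching test=storage-provisioner in default on an interval until they are Running or the 6m0s context expires; sp-pod never left Pending, so the deadline fired. A minimal client-go sketch of the same pattern follows (kubeconfig handling and names are assumptions; the actual helper lives in helpers_test.go and is not reproduced here):

// waitpods.go - sketch of a label-selector pod wait like the one the harness
// ran above; an illustration, not the helpers_test.go implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 5s, give up after 6m - the same budget the test used.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "test=storage-provisioner",
			})
			if err != nil {
				return false, nil // treat list errors as transient; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	fmt.Println("wait result:", err) // "context deadline exceeded" on failure
}

Returning false, nil on listing errors keeps the poll alive through transient API or client-side rate-limiter hiccups, which is the same behavior behind the "client rate limiter Wait returned an error" warning above.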
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-643455
helpers_test.go:243: (dbg) docker inspect functional-643455:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
	        "Created": "2025-11-15T09:27:38.460289529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-15T09:27:38.49284776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/hosts",
	        "LogPath": "/var/lib/docker/containers/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f/75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f-json.log",
	        "Name": "/functional-643455",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-643455:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-643455",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75d4c555182ef259c4fe3cf0e40dc50aaa963f9faa0a719174698fca1b7fbe0f",
	                "LowerDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2-init/diff:/var/lib/docker/overlay2/dd55a3984a0401bbe9c47729dc0fec07395bf4daab8d10377766fb7a6cf0f6d2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6fb531e75d0eea8076d7f643cf1d8c98b7ecbdafda46cdb359559dfe5e18da2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-643455",
	                "Source": "/var/lib/docker/volumes/functional-643455/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-643455",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-643455",
	                "name.minikube.sigs.k8s.io": "functional-643455",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "029c034dcecc64b8ccca91cb8f52a0ca277442aca7cd6409ecdd0fb513d4f17f",
	            "SandboxKey": "/var/run/docker/netns/029c034dcecc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-643455": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be24d09662bb1f50ee771e52c11387b4f471476e50e89b32b3a29bd33fc96223",
	                    "EndpointID": "2c39da97daa59f3d6450a6acb87027688136e17fb9118a11649286155d98bd18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d6:72:d6:b1:0e:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-643455",
	                        "75d4c555182e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
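The post-mortem reads this inspect dump for the container state and the 127.0.0.1 host ports (32783-32787) that front the guest's 22, 2376, 5000, 8441 and 32443 services; 8441 is the apiserver port used by the status checks below. As a sketch, the same fields can be read with the Docker Engine Go SDK rather than by shelling out:

// inspect_ports.go - sketch: read the port bindings shown in the dump above
// via the Docker Engine Go SDK instead of parsing `docker inspect` output.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "functional-643455")
	if err != nil {
		panic(err)
	}
	// NetworkSettings.Ports maps container ports to host bindings,
	// e.g. 8441/tcp -> 127.0.0.1:32786 in the dump above.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}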
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-643455 -n functional-643455
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 logs -n 25: (1.24659098s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-643455 ssh sudo umount -f /mount-9p                                                                    │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount          │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount3 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount          │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount2 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ ssh            │ functional-643455 ssh findmnt -T /mount1                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ mount          │ -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount1 --alsologtostderr -v=1 │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ ssh            │ functional-643455 ssh findmnt -T /mount1                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh            │ functional-643455 ssh findmnt -T /mount2                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ ssh            │ functional-643455 ssh findmnt -T /mount3                                                                          │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ mount          │ -p functional-643455 --kill=true                                                                                  │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ addons         │ functional-643455 addons list                                                                                     │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ addons         │ functional-643455 addons list -o json                                                                             │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │ 15 Nov 25 09:29 UTC │
	│ start          │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd   │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ start          │ -p functional-643455 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd             │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ start          │ -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd   │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-643455 --alsologtostderr -v=1                                                    │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:29 UTC │                     │
	│ update-context │ functional-643455 update-context --alsologtostderr -v=2                                                           │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ update-context │ functional-643455 update-context --alsologtostderr -v=2                                                           │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ update-context │ functional-643455 update-context --alsologtostderr -v=2                                                           │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ image          │ functional-643455 image ls --format short --alsologtostderr                                                       │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ image          │ functional-643455 image ls --format yaml --alsologtostderr                                                        │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ ssh            │ functional-643455 ssh pgrep buildkitd                                                                             │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │                     │
	│ image          │ functional-643455 image build -t localhost/my-image:functional-643455 testdata/build --alsologtostderr            │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ image          │ functional-643455 image ls                                                                                        │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ image          │ functional-643455 image ls --format json --alsologtostderr                                                        │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	│ image          │ functional-643455 image ls --format table --alsologtostderr                                                       │ functional-643455 │ jenkins │ v1.37.0 │ 15 Nov 25 09:34 UTC │ 15 Nov 25 09:34 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:29:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:29:44.875838  174988 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:29:44.875936  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.875944  174988 out.go:374] Setting ErrFile to fd 2...
	I1115 09:29:44.875957  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.876325  174988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:29:44.876748  174988 out.go:368] Setting JSON to false
	I1115 09:29:44.877812  174988 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15135,"bootTime":1763183850,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:29:44.877921  174988 start.go:143] virtualization: kvm guest
	I1115 09:29:44.880159  174988 out.go:179] * [functional-643455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:29:44.881646  174988 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:29:44.881678  174988 notify.go:221] Checking for updates...
	I1115 09:29:44.884009  174988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:29:44.885173  174988 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:29:44.886339  174988 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:29:44.887594  174988 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:29:44.888818  174988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:29:44.890443  174988 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:29:44.890911  174988 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:29:44.915414  174988 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:29:44.915506  174988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:29:44.974874  174988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:44.965206839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:29:44.974982  174988 docker.go:319] overlay module found
	I1115 09:29:44.976788  174988 out.go:179] * Using the docker driver based on existing profile
	I1115 09:29:44.978139  174988 start.go:309] selected driver: docker
	I1115 09:29:44.978155  174988 start.go:930] validating driver "docker" against &{Name:functional-643455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-643455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:29:44.978254  174988 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:29:44.980009  174988 out.go:203] 
	W1115 09:29:44.981297  174988 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:29:44.982533  174988 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2cda70e5609c4       56cc512116c8f       5 minutes ago       Exited              mount-munger              0                   b835aa6cdf85b       busybox-mount                               default
	17a282b1fa4c9       5107333e08a87       6 minutes ago       Running             mysql                     0                   bb3d9dcf11838       mysql-5bb876957f-5bd4x                      default
	bf75b4fead77d       9056ab77afb8e       6 minutes ago       Running             echo-server               0                   0a593cd9578a6       hello-node-connect-7d85dfc575-q2qtv         default
	564d4fabc270f       6e38f40d628db       6 minutes ago       Running             storage-provisioner       2                   7280b209c4a1e       storage-provisioner                         kube-system
	59b2e611066bb       c80c8dbafe7dd       6 minutes ago       Running             kube-controller-manager   2                   488ecac322c4f       kube-controller-manager-functional-643455   kube-system
	babc27772525c       c3994bc696102       6 minutes ago       Running             kube-apiserver            0                   26840f129c94e       kube-apiserver-functional-643455            kube-system
	20e7221441e30       5f1f5298c888d       6 minutes ago       Running             etcd                      1                   f97b5bca4f6a7       etcd-functional-643455                      kube-system
	fb18f66b9833e       409467f978b4a       6 minutes ago       Running             kindnet-cni               1                   bfebc994070f2       kindnet-9ck6k                               kube-system
	0eea2114b1571       fc25172553d79       6 minutes ago       Running             kube-proxy                1                   6075525d36525       kube-proxy-nwjjp                            kube-system
	cc9efc6fc9059       c80c8dbafe7dd       6 minutes ago       Exited              kube-controller-manager   1                   488ecac322c4f       kube-controller-manager-functional-643455   kube-system
	4e1710787e24e       7dd6aaa1717ab       6 minutes ago       Running             kube-scheduler            1                   558179c3009ad       kube-scheduler-functional-643455            kube-system
	dd1dfa9b2e913       6e38f40d628db       6 minutes ago       Exited              storage-provisioner       1                   7280b209c4a1e       storage-provisioner                         kube-system
	ec35552550ecd       52546a367cc9e       6 minutes ago       Running             coredns                   1                   d3841618a2a6f       coredns-66bc5c9577-gslgg                    kube-system
	71224bce65213       52546a367cc9e       7 minutes ago       Exited              coredns                   0                   d3841618a2a6f       coredns-66bc5c9577-gslgg                    kube-system
	4c8deb830b3c4       409467f978b4a       7 minutes ago       Exited              kindnet-cni               0                   bfebc994070f2       kindnet-9ck6k                               kube-system
	12df49b0bdbf1       fc25172553d79       7 minutes ago       Exited              kube-proxy                0                   6075525d36525       kube-proxy-nwjjp                            kube-system
	81fd6e38ed44f       7dd6aaa1717ab       7 minutes ago       Exited              kube-scheduler            0                   558179c3009ad       kube-scheduler-functional-643455            kube-system
	dc71f2be2fc35       5f1f5298c888d       7 minutes ago       Exited              etcd                      0                   f97b5bca4f6a7       etcd-functional-643455                      kube-system
	
	
	==> containerd <==
	Nov 15 09:34:50 functional-643455 containerd[3858]: time="2025-11-15T09:34:50.961097306Z" level=warning msg="cleaning up after shim disconnected" id=rxzo5in6twx7njvazyl389527 namespace=k8s.io
	Nov 15 09:34:50 functional-643455 containerd[3858]: time="2025-11-15T09:34:50.961121817Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 15 09:34:51 functional-643455 containerd[3858]: time="2025-11-15T09:34:51.104733277Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-643455\""
	Nov 15 09:34:51 functional-643455 containerd[3858]: time="2025-11-15T09:34:51.108466821Z" level=info msg="ImageCreate event name:\"sha256:93823aa38741e4cee9447ab46ca7d8340b95896052224069b50acb58a8bed831\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 15 09:34:51 functional-643455 containerd[3858]: time="2025-11-15T09:34:51.109125199Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-643455\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 15 09:35:10 functional-643455 containerd[3858]: time="2025-11-15T09:35:10.121660946Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Nov 15 09:35:10 functional-643455 containerd[3858]: time="2025-11-15T09:35:10.123257425Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:10 functional-643455 containerd[3858]: time="2025-11-15T09:35:10.201493293Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:10 functional-643455 containerd[3858]: time="2025-11-15T09:35:10.357778473Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:35:10 functional-643455 containerd[3858]: time="2025-11-15T09:35:10.357853392Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Nov 15 09:35:13 functional-643455 containerd[3858]: time="2025-11-15T09:35:13.122182346Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Nov 15 09:35:13 functional-643455 containerd[3858]: time="2025-11-15T09:35:13.123841293Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:13 functional-643455 containerd[3858]: time="2025-11-15T09:35:13.182342251Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:13 functional-643455 containerd[3858]: time="2025-11-15T09:35:13.267938916Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:35:13 functional-643455 containerd[3858]: time="2025-11-15T09:35:13.268064074Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Nov 15 09:35:14 functional-643455 containerd[3858]: time="2025-11-15T09:35:14.123505439Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Nov 15 09:35:14 functional-643455 containerd[3858]: time="2025-11-15T09:35:14.125145215Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:14 functional-643455 containerd[3858]: time="2025-11-15T09:35:14.185618813Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:14 functional-643455 containerd[3858]: time="2025-11-15T09:35:14.262588043Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:35:14 functional-643455 containerd[3858]: time="2025-11-15T09:35:14.262632124Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Nov 15 09:35:16 functional-643455 containerd[3858]: time="2025-11-15T09:35:16.121298944Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Nov 15 09:35:16 functional-643455 containerd[3858]: time="2025-11-15T09:35:16.123143410Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:16 functional-643455 containerd[3858]: time="2025-11-15T09:35:16.184834293Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Nov 15 09:35:16 functional-643455 containerd[3858]: time="2025-11-15T09:35:16.271351605Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Nov 15 09:35:16 functional-643455 containerd[3858]: time="2025-11-15T09:35:16.271432134Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
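Two distinct problems show in this stanza. The 429 responses are the same Docker Hub rate limit seen in the pod events. Separately, every pull also logs "failed to decode hosts.toml: invalid `host` tree", meaning containerd found a registry hosts file (normally under /etc/containerd/certs.d/<registry>/hosts.toml) that it could not use, so no mirror configuration took effect and pulls went straight to registry-1.docker.io unauthenticated. For reference, a well-formed hosts.toml has the shape embedded in the sketch below; the mirror URL is a placeholder, and the check only proves the TOML parses, since the `host` tree schema is validated by containerd itself:

// hoststoml_check.go - sketch: a plausible docker.io hosts.toml plus a bare
// TOML-validity check with github.com/BurntSushi/toml. The mirror host is a
// placeholder, not a real endpoint.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

const hostsTOML = `
server = "https://registry-1.docker.io"

[host."https://mirror.example.internal"]
  capabilities = ["pull", "resolve"]
`

func main() {
	var v map[string]interface{}
	if _, err := toml.Decode(hostsTOML, &v); err != nil {
		// A malformed file would surface here, much as containerd reports
		// a decode failure in the log above before falling back.
		fmt.Println("invalid hosts.toml:", err)
		return
	}
	fmt.Println("hosts.toml parses; host tree:", v["host"])
}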
	
	
	==> coredns [71224bce65213776a3058b9d9b685001f8515f08b6b57cb996061ae7af3d144b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42898 - 36145 "HINFO IN 8984940392331241906.8485209873416469064. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045622994s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec35552550ecdcc3b355ec8adfa48f77638c99f67e22e267a5a5312cda6d6e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38627 - 823 "HINFO IN 155197805458491775.8394333180523951329. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.088504316s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-643455
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-643455
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0dfcbc84b0746df72f342b95a4fedfa3ccdd9510
	                    minikube.k8s.io/name=functional-643455
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_15T09_27_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 15 Nov 2025 09:27:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-643455
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 15 Nov 2025 09:35:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 15 Nov 2025 09:35:17 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 15 Nov 2025 09:35:17 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 15 Nov 2025 09:35:17 +0000   Sat, 15 Nov 2025 09:27:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 15 Nov 2025 09:35:17 +0000   Sat, 15 Nov 2025 09:28:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-643455
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9f8a9454-d3ff-4e20-a36e-cf2efe1bcbc9
	  Boot ID:                    fbc9987d-de80-43b3-8f69-13458401c4dd
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sx2nl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     hello-node-connect-7d85dfc575-q2qtv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     mysql-5bb876957f-5bd4x                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m10s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-gslgg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m25s
	  kube-system                 etcd-functional-643455                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m31s
	  kube-system                 kindnet-9ck6k                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m25s
	  kube-system                 kube-apiserver-functional-643455              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-functional-643455     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-nwjjp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-scheduler-functional-643455              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-gcsp4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gq4vv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m24s                  kube-proxy       
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node functional-643455 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node functional-643455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node functional-643455 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m26s                  node-controller  Node functional-643455 event: Registered Node functional-643455 in Controller
	  Normal  NodeReady                7m14s                  kubelet          Node functional-643455 status is now: NodeReady
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node functional-643455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node functional-643455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node functional-643455 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-643455 event: Registered Node functional-643455 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [20e7221441e30ddff73a233e0fb39c7859b8cdcf308f699b3da6d4ea14757f97] <==
	{"level":"warn","ts":"2025-11-15T09:28:48.547207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.555891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.561820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.568579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.574820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.580802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.587893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.593831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.601012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.617260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.623700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.629882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.637301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.643481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.649905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.656858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.663547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.669659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.676825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.682971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.689137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.707604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.714647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.721973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:28:48.772471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50872","server-name":"","error":"EOF"}
	
	
	==> etcd [dc71f2be2fc35f940d08e52670de1d4a1226f5ed51724f2c27632ec3469c374d] <==
	{"level":"warn","ts":"2025-11-15T09:27:50.760606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.767815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.773822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.794096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.801683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.810153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-15T09:27:50.859079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35488","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-15T09:28:45.349803Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-15T09:28:45.349978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-15T09:28:45.350141Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:28:45.351775Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-15T09:28:45.353125Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.353180Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353221Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-15T09:28:45.353283Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-15T09:28:45.353293Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353298Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:28:45.353318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353243Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-15T09:28:45.353342Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-15T09:28:45.353354Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.355660Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-15T09:28:45.355740Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-15T09:28:45.355771Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-15T09:28:45.355789Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-643455","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:35:24 up  4:17,  0 user,  load average: 0.38, 0.71, 1.61
	Linux functional-643455 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c8deb830b3c40c4c2e7460472b20a8a34868b3c6ed2a2b28e8e2eb708d19b1e] <==
	I1115 09:28:00.290015       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1115 09:28:00.290318       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1115 09:28:00.290463       1 main.go:148] setting mtu 1500 for CNI 
	I1115 09:28:00.290482       1 main.go:178] kindnetd IP family: "ipv4"
	I1115 09:28:00.290506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-15T09:28:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1115 09:28:00.492641       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1115 09:28:00.492730       1 controller.go:381] "Waiting for informer caches to sync"
	I1115 09:28:00.492745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1115 09:28:00.493004       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1115 09:28:00.885572       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1115 09:28:00.885610       1 metrics.go:72] Registering metrics
	I1115 09:28:00.885711       1 controller.go:711] "Syncing nftables rules"
	I1115 09:28:10.494295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:10.494381       1 main.go:301] handling current node
	I1115 09:28:20.498719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:20.498760       1 main.go:301] handling current node
	I1115 09:28:30.497908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:28:30.497939       1 main.go:301] handling current node
	
	
	==> kindnet [fb18f66b9833e3dde538053ed3f57dd6dfdb05cb5a04a8703272118f19fe0bd1] <==
	I1115 09:33:16.194812       1 main.go:301] handling current node
	I1115 09:33:26.191188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:26.191228       1 main.go:301] handling current node
	I1115 09:33:36.195146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:36.195182       1 main.go:301] handling current node
	I1115 09:33:46.199914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:46.199951       1 main.go:301] handling current node
	I1115 09:33:56.191660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:33:56.191707       1 main.go:301] handling current node
	I1115 09:34:06.199686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:06.199727       1 main.go:301] handling current node
	I1115 09:34:16.198199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:16.198236       1 main.go:301] handling current node
	I1115 09:34:26.191572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:26.191604       1 main.go:301] handling current node
	I1115 09:34:36.193216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:36.193255       1 main.go:301] handling current node
	I1115 09:34:46.197401       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:46.197436       1 main.go:301] handling current node
	I1115 09:34:56.191079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:34:56.191138       1 main.go:301] handling current node
	I1115 09:35:06.200206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:06.200255       1 main.go:301] handling current node
	I1115 09:35:16.196220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1115 09:35:16.196257       1 main.go:301] handling current node
	
	
	==> kube-apiserver [babc27772525c961baf898d5c14615a30cff6db31c2bcaed456c0b27dbbaeeb8] <==
	I1115 09:28:49.239447       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1115 09:28:49.262732       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1115 09:28:50.141810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1115 09:28:50.268508       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1115 09:28:50.443490       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1115 09:28:50.444740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1115 09:28:50.449478       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1115 09:28:50.987214       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1115 09:28:51.077837       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1115 09:28:51.125716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1115 09:28:51.137090       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1115 09:28:56.155591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1115 09:29:08.941026       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.120.65"}
	I1115 09:29:13.445263       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.91.133"}
	I1115 09:29:14.778145       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.53.42"}
	I1115 09:29:20.827479       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.14.196"}
	I1115 09:29:22.112518       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.127.55"}
	E1115 09:29:28.953220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39926: use of closed network connection
	E1115 09:29:30.359901       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39944: use of closed network connection
	E1115 09:29:32.570122       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39958: use of closed network connection
	I1115 09:29:45.819201       1 controller.go:667] quota admission added evaluator for: namespaces
	I1115 09:29:45.953077       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.187.89"}
	I1115 09:29:45.964682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.163.26"}
	
	
	==> kube-controller-manager [59b2e611066bb26cf54b4e22ea3bff8df16074d96a0d58f8bca35318b1d8397e] <==
	I1115 09:28:51.985933       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1115 09:28:51.985979       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1115 09:28:51.986098       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1115 09:28:51.986413       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1115 09:28:51.986426       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1115 09:28:51.986415       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1115 09:28:51.986509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1115 09:28:51.987007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1115 09:28:51.987074       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1115 09:28:51.987524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1115 09:28:51.988575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1115 09:28:51.991276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:28:51.994197       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1115 09:28:51.994259       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1115 09:28:51.994301       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1115 09:28:51.994309       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1115 09:28:51.994315       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1115 09:28:52.002224       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1115 09:28:52.012283       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1115 09:29:45.867793       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.871537       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.874791       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.876115       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.878691       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1115 09:29:45.884312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cc9efc6fc9059c7ecb39bcd62cb964c9b28d22237804da685ab2e20045fee203] <==
	I1115 09:28:36.434971       1 serving.go:386] Generated self-signed cert in-memory
	I1115 09:28:37.179644       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1115 09:28:37.179668       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:37.181112       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1115 09:28:37.181116       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1115 09:28:37.181432       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1115 09:28:37.181459       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1115 09:28:47.182963       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [0eea2114b1571ce0a888ea43435cf1aaf3f9357fdb10b1195e8c51c681f176e2] <==
	I1115 09:28:35.926903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1115 09:28:35.927913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:36.929125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:38.805776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:44.306372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-643455&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1115 09:28:55.327547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:28:55.327588       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:28:55.327664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:28:55.349670       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:28:55.349735       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:28:55.355434       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:28:55.355930       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:28:55.355962       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:55.358227       1 config.go:200] "Starting service config controller"
	I1115 09:28:55.358308       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:28:55.358373       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:28:55.358380       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:28:55.358255       1 config.go:309] "Starting node config controller"
	I1115 09:28:55.358404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:28:55.358411       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:28:55.358694       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:28:55.358709       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:28:55.458500       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:28:55.460010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1115 09:28:55.460039       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [12df49b0bdbf13f0052ec752866e2308cdebef7eb02aa3c3f90bad04188baeb6] <==
	I1115 09:27:59.895569       1 server_linux.go:53] "Using iptables proxy"
	I1115 09:27:59.967964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1115 09:28:00.068553       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1115 09:28:00.068616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1115 09:28:00.068733       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1115 09:28:00.089484       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1115 09:28:00.089545       1 server_linux.go:132] "Using iptables Proxier"
	I1115 09:28:00.094834       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1115 09:28:00.095367       1 server.go:527] "Version info" version="v1.34.1"
	I1115 09:28:00.095411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1115 09:28:00.097116       1 config.go:200] "Starting service config controller"
	I1115 09:28:00.097147       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1115 09:28:00.097187       1 config.go:106] "Starting endpoint slice config controller"
	I1115 09:28:00.097193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1115 09:28:00.097216       1 config.go:403] "Starting serviceCIDR config controller"
	I1115 09:28:00.097304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1115 09:28:00.097364       1 config.go:309] "Starting node config controller"
	I1115 09:28:00.097375       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1115 09:28:00.097382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1115 09:28:00.197347       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1115 09:28:00.197347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1115 09:28:00.197618       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4e1710787e24ed725efd1baf8185da908cde84b9609641698e1063153aac9e5e] <==
	E1115 09:28:41.170473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:28:41.178915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:28:41.351173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:28:41.407919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:28:41.450528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:28:43.572781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:28:44.247390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:28:44.491689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:28:44.567545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:28:44.840315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:28:44.890860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:28:44.891434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:28:45.313304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1115 09:28:45.522511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1115 09:28:45.726307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1115 09:28:45.758983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1115 09:28:46.185391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:28:46.200407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:28:46.213130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:28:46.246730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:28:46.326995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:28:46.369849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1115 09:28:46.787615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:28:47.489187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1115 09:28:57.693703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [81fd6e38ed44f24f83e30b9f760f68608a59e45ddfa53d48e689f61dc83a06fb] <==
	E1115 09:27:51.281612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:27:51.281671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1115 09:27:51.281741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:27:51.281807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1115 09:27:51.281864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:27:51.285219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1115 09:27:51.285423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1115 09:27:52.086790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1115 09:27:52.166397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1115 09:27:52.204946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1115 09:27:52.217198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1115 09:27:52.244369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1115 09:27:52.247394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1115 09:27:52.316405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1115 09:27:52.372527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1115 09:27:52.468910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1115 09:27:52.492229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1115 09:27:52.632513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1115 09:27:55.375996       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:28:35.165221       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1115 09:28:35.165251       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1115 09:28:35.165269       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1115 09:28:35.165363       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1115 09:28:35.165459       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1115 09:28:35.165486       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 15 09:34:47 functional-643455 kubelet[4901]: E1115 09:34:47.122865    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:34:47 functional-643455 kubelet[4901]: E1115 09:34:47.122865    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:34:49 functional-643455 kubelet[4901]: E1115 09:34:49.121140    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:34:57 functional-643455 kubelet[4901]: E1115 09:34:57.122347    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:34:57 functional-643455 kubelet[4901]: E1115 09:34:57.123022    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:34:59 functional-643455 kubelet[4901]: E1115 09:34:59.124915    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:35:00 functional-643455 kubelet[4901]: E1115 09:35:00.122327    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:35:01 functional-643455 kubelet[4901]: E1115 09:35:01.121763    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	Nov 15 09:35:10 functional-643455 kubelet[4901]: E1115 09:35:10.122087    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gq4vv" podUID="4e78954f-1256-46f8-8490-8d686648cde6"
	Nov 15 09:35:10 functional-643455 kubelet[4901]: E1115 09:35:10.358098    4901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 15 09:35:10 functional-643455 kubelet[4901]: E1115 09:35:10.358154    4901 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 15 09:35:10 functional-643455 kubelet[4901]: E1115 09:35:10.358236    4901 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(2393f4a7-5ffe-4821-99e3-ea6552a163f7): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 15 09:35:10 functional-643455 kubelet[4901]: E1115 09:35:10.358272    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="2393f4a7-5ffe-4821-99e3-ea6552a163f7"
	Nov 15 09:35:13 functional-643455 kubelet[4901]: E1115 09:35:13.268304    4901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 15 09:35:13 functional-643455 kubelet[4901]: E1115 09:35:13.268373    4901 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 15 09:35:13 functional-643455 kubelet[4901]: E1115 09:35:13.268486    4901 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4_kubernetes-dashboard(bc09e4d8-f970-4eec-83b3-5662106ad81f): ErrImagePull: failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 15 09:35:13 functional-643455 kubelet[4901]: E1115 09:35:13.268531    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-gcsp4" podUID="bc09e4d8-f970-4eec-83b3-5662106ad81f"
	Nov 15 09:35:14 functional-643455 kubelet[4901]: E1115 09:35:14.262887    4901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 15 09:35:14 functional-643455 kubelet[4901]: E1115 09:35:14.262962    4901 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 15 09:35:14 functional-643455 kubelet[4901]: E1115 09:35:14.263049    4901 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(d9932061-6756-48e8-bb60-59001527b050): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 15 09:35:14 functional-643455 kubelet[4901]: E1115 09:35:14.263100    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d9932061-6756-48e8-bb60-59001527b050"
	Nov 15 09:35:16 functional-643455 kubelet[4901]: E1115 09:35:16.271637    4901 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 15 09:35:16 functional-643455 kubelet[4901]: E1115 09:35:16.271721    4901 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 15 09:35:16 functional-643455 kubelet[4901]: E1115 09:35:16.271826    4901 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-sx2nl_default(7ca07bab-7255-4c58-9def-d033a33120e9): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 15 09:35:16 functional-643455 kubelet[4901]: E1115 09:35:16.271876    4901 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sx2nl" podUID="7ca07bab-7255-4c58-9def-d033a33120e9"
	
	
	==> storage-provisioner [564d4fabc270f8233361e6322badd95ab1ccf27337c2f9b7a77f6c63013f1f9b] <==
	W1115 09:34:59.273919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:01.277043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:01.281999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:03.286069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:03.290470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:05.293764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:05.297772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:07.301234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:07.304872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:09.308200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:09.313081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:11.316379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:11.320296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:13.324128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:13.329242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:15.332567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:15.336411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:17.339983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:17.344602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:19.348260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:19.352127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:21.355353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:21.359129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:23.362434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1115 09:35:23.366218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dd1dfa9b2e913da162f5d62e0505a76b211080b8cab95935e800b19c395cad29] <==
	I1115 09:28:35.787220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1115 09:28:35.792470       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
helpers_test.go:269: (dbg) Run:  kubectl --context functional-643455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1 (90.965899ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:35 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://2cda70e5609c48560b543ec240ea7b5b6dfeb79dc264b8dd459d7bba2c5947ff
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 15 Nov 2025 09:29:37 +0000
	      Finished:     Sat, 15 Nov 2025 09:29:37 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sgv2q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-sgv2q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m50s  default-scheduler  Successfully assigned default/busybox-mount to functional-643455
	  Normal  Pulling    5m49s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m48s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.562s (1.562s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5m48s  kubelet            Created container: mount-munger
	  Normal  Started    5m48s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sx2nl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5rgj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5rgj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sx2nl to functional-643455
	  Normal   Pulling    2m51s (x5 over 6m3s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m51s (x5 over 6m2s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     49s (x20 over 6m2s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    36s (x21 over 6m2s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:20 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrftf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrftf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m5s                   default-scheduler  Successfully assigned default/nginx-svc to functional-643455
	  Warning  Failed     6m3s                   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m58s (x5 over 6m4s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m58s (x5 over 6m3s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m58s (x4 over 5m50s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    50s (x21 over 6m2s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     50s (x21 over 6m2s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-643455/192.168.49.2
	Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5kkb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p5kkb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-643455
	  Normal   Pulling    2m58s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m58s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m58s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x20 over 6m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    41s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-gcsp4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gq4vv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-643455 describe pod busybox-mount hello-node-75c85bcc94-sx2nl nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-gcsp4 kubernetes-dashboard-855c9754f9-gq4vv: exit status 1
E1115 09:39:13.714142  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.92s)
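
Editor's note: every pull in the kubelet log above failed with "429 Too Many Requests" from Docker Hub's unauthenticated rate limit, which is what cascaded into this failure. A minimal sketch of one common workaround, authenticating pulls with an imagePullSecret; the credentials and the secret name "regcred" are hypothetical placeholders, not part of this run:

	# Create a Docker Hub pull secret in the default namespace (credentials are placeholders).
	kubectl --context functional-643455 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> \
	  --docker-password=<dockerhub-token>
	# Attach it to the default service account so pods in the namespace pull authenticated.
	kubectl --context functional-643455 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'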

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-643455 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d9932061-6756-48e8-bb60-59001527b050] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-11-15 09:33:21.163164099 +0000 UTC m=+660.464065271
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-643455 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-643455 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-643455/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:29:20 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrftf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lrftf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/nginx-svc to functional-643455
Warning  Failed     3m59s                kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    54s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     54s (x5 over 3m59s)  kubelet            Error: ErrImagePull
Warning  Failed     54s (x4 over 3m46s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3s (x15 over 3m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     3s (x15 over 3m58s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-643455 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-643455 logs nginx-svc -n default: exit status 1 (68.197872ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-643455 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)
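
Editor's note: nginx-svc never became Ready for the same reason, a rate-limited pull of docker.io/nginx:alpine. A sketch of sidestepping the registry entirely by loading the image from the host's daemon into the cluster's containerd store; it assumes the image can be pulled (or already exists) on the host:

	# Pull once on the host; this counts against the host's limit, not the node's.
	docker pull nginx:alpine
	# Copy the image into the minikube node so the pod needs no registry pull.
	out/minikube-linux-amd64 -p functional-643455 image load nginx:alpine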

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-643455 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-643455 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-sx2nl" [7ca07bab-7255-4c58-9def-d033a33120e9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-643455 -n functional-643455
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-15 09:39:22.45711557 +0000 UTC m=+1021.758016732
functional_test.go:1460: (dbg) Run:  kubectl --context functional-643455 describe po hello-node-75c85bcc94-sx2nl -n default
functional_test.go:1460: (dbg) kubectl --context functional-643455 describe po hello-node-75c85bcc94-sx2nl -n default:
Name:             hello-node-75c85bcc94-sx2nl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-643455/192.168.49.2
Start Time:       Sat, 15 Nov 2025 09:29:22 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5rgj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j5rgj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sx2nl to functional-643455
Normal   Pulling    6m48s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m48s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m48s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m46s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m33s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-643455 logs hello-node-75c85bcc94-sx2nl -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-643455 logs hello-node-75c85bcc94-sx2nl -n default: exit status 1 (67.095471ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-sx2nl" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-643455 logs hello-node-75c85bcc94-sx2nl -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)
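
Editor's note: the deployment object itself was created; only the image pull blocked the rollout. A diagnostic sketch (not part of the test) for confirming a rollout is stuck rather than waiting out the full 10m0s timeout:

	# Fails fast instead of hanging when the deployment cannot progress.
	kubectl --context functional-643455 rollout status deployment/hello-node --timeout=60s
	# The Waiting reason pinpoints the cause (ImagePullBackOff in this run).
	kubectl --context functional-643455 get pod -l app=hello-node \
	  -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'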

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1115 09:33:21.297527  128258 retry.go:31] will retry after 3.381849764s: Temporary Error: Get "http:": http: no Host in request URL
I1115 09:33:24.679730  128258 retry.go:31] will retry after 4.985522042s: Temporary Error: Get "http:": http: no Host in request URL
I1115 09:33:29.666417  128258 retry.go:31] will retry after 5.016502994s: Temporary Error: Get "http:": http: no Host in request URL
I1115 09:33:34.683251  128258 retry.go:31] will retry after 13.4663622s: Temporary Error: Get "http:": http: no Host in request URL
I1115 09:33:48.150613  128258 retry.go:31] will retry after 20.556393697s: Temporary Error: Get "http:": http: no Host in request URL
I1115 09:34:08.707272  128258 retry.go:31] will retry after 18.925810494s: Temporary Error: Get "http:": http: no Host in request URL
E1115 09:34:13.713612  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1115 09:34:27.634150  128258 retry.go:31] will retry after 40.623780189s: Temporary Error: Get "http:": http: no Host in request URL
E1115 09:34:41.419597  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-643455 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.14.196   10.97.14.196   80:32587/TCP   5m48s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.02s)
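
Editor's note: the retries hit "http:" with no host because the test read back an empty URL, even though the service did receive an external IP (10.97.14.196, per the svc output above). A sketch of the manual equivalent, assuming minikube tunnel is kept running in a separate shell:

	# Shell 1: route the LoadBalancer range to the cluster (prompts for sudo to add routes).
	out/minikube-linux-amd64 -p functional-643455 tunnel
	# Shell 2: resolve the external IP and hit it directly.
	IP=$(kubectl --context functional-643455 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl "http://$IP"   # should contain "Welcome to nginx!" once the pod is Running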

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 service --namespace=default --https --url hello-node: exit status 115 (545.691198ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32579
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-643455 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 service hello-node --url --format={{.IP}}: exit status 115 (549.612797ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-643455 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 service hello-node --url: exit status 115 (537.724159ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32579
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-643455 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32579
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
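
Editor's note: the three ServiceCmd failures above (HTTPS, Format, URL) all exit with SVC_UNREACHABLE for the same reason: the NodePort URL resolves fine, but no running pod backs the hello-node service. A sketch for checking that a service has ready endpoints before asking minikube for its URL:

	# An empty ENDPOINTS column means no ready pod backs the service.
	kubectl --context functional-643455 get endpoints hello-node
	# Confirm the backing pod's state (ImagePullBackOff in this run).
	kubectl --context functional-643455 get pods -l app=hello-node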

                                                
                                    

Test pass (298/332)

Order  Test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.66
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.3
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.81
22 TestOffline 52.85
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 102.55
29 TestAddons/serial/Volcano 38.22
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.46
35 TestAddons/parallel/Registry 14.41
36 TestAddons/parallel/RegistryCreds 0.7
37 TestAddons/parallel/Ingress 17.96
38 TestAddons/parallel/InspektorGadget 11.65
39 TestAddons/parallel/MetricsServer 5.68
41 TestAddons/parallel/CSI 44.65
42 TestAddons/parallel/Headlamp 16.62
43 TestAddons/parallel/CloudSpanner 5.51
44 TestAddons/parallel/LocalPath 8.19
45 TestAddons/parallel/NvidiaDevicePlugin 5.52
46 TestAddons/parallel/Yakd 10.67
47 TestAddons/parallel/AmdGpuDevicePlugin 5.49
48 TestAddons/StoppedEnableDisable 12.7
49 TestCertOptions 26.16
50 TestCertExpiration 217.06
52 TestForceSystemdFlag 24.54
53 TestForceSystemdEnv 35.91
54 TestDockerEnvContainerd 35
58 TestErrorSpam/setup 19.42
59 TestErrorSpam/start 0.67
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 1.44
62 TestErrorSpam/unpause 1.54
63 TestErrorSpam/stop 12.07
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 40.17
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.66
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.52
75 TestFunctional/serial/CacheCmd/cache/add_local 0.87
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 40.27
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.23
86 TestFunctional/serial/LogsFileCmd 1.23
87 TestFunctional/serial/InvalidService 4.5
89 TestFunctional/parallel/ConfigCmd 0.47
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 7.66
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.67
102 TestFunctional/parallel/CpCmd 1.92
103 TestFunctional/parallel/MySQL 17.94
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.9
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.49
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
121 TestFunctional/parallel/ImageCommands/Setup 0.41
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
139 TestFunctional/parallel/ProfileCmd/profile_list 0.4
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
141 TestFunctional/parallel/MountCmd/any-port 6.85
142 TestFunctional/parallel/MountCmd/specific-port 1.88
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ServiceCmd/List 1.71
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 116.74
162 TestMultiControlPlane/serial/DeployApp 5.35
163 TestMultiControlPlane/serial/PingHostFromPods 1.19
164 TestMultiControlPlane/serial/AddWorkerNode 23.93
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
167 TestMultiControlPlane/serial/CopyFile 17.19
168 TestMultiControlPlane/serial/StopSecondaryNode 12.74
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 9.08
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.74
173 TestMultiControlPlane/serial/DeleteSecondaryNode 9.27
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
175 TestMultiControlPlane/serial/StopCluster 36.17
176 TestMultiControlPlane/serial/RestartCluster 57.29
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
178 TestMultiControlPlane/serial/AddSecondaryNode 47.97
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
184 TestJSONOutput/start/Command 35.9
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.6
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.85
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 31.7
210 TestKicCustomNetwork/use_default_bridge_network 22.9
211 TestKicExistingNetwork 24.54
212 TestKicCustomSubnet 24.24
213 TestKicStaticIP 24.68
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 47.47
218 TestMountStart/serial/StartWithMountFirst 4.51
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 4.96
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.67
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.26
225 TestMountStart/serial/RestartStopped 6.91
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 62.29
230 TestMultiNode/serial/DeployApp2Nodes 4.34
231 TestMultiNode/serial/PingHostFrom2Pods 0.8
232 TestMultiNode/serial/AddNode 23.68
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.66
235 TestMultiNode/serial/CopyFile 9.89
236 TestMultiNode/serial/StopNode 2.28
237 TestMultiNode/serial/StartAfterStop 6.9
238 TestMultiNode/serial/RestartKeepsNodes 71.41
239 TestMultiNode/serial/DeleteNode 5.22
240 TestMultiNode/serial/StopMultiNode 24.01
241 TestMultiNode/serial/RestartMultiNode 49.45
242 TestMultiNode/serial/ValidateNameConflict 23.78
247 TestPreload 104.28
249 TestScheduledStopUnix 97.76
252 TestInsufficientStorage 12.2
253 TestRunningBinaryUpgrade 45.62
255 TestKubernetesUpgrade 324.85
256 TestMissingContainerUpgrade 103.01
258 TestPause/serial/Start 52.62
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
261 TestNoKubernetes/serial/StartWithK8s 30.47
262 TestNoKubernetes/serial/StartWithStopK8s 18.48
263 TestStoppedBinaryUpgrade/Setup 0.58
264 TestStoppedBinaryUpgrade/Upgrade 99.66
265 TestNoKubernetes/serial/Start 8.33
266 TestPause/serial/SecondStartNoReconfiguration 6.29
267 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
269 TestNoKubernetes/serial/ProfileList 1.09
270 TestNoKubernetes/serial/Stop 2.17
271 TestPause/serial/Pause 0.82
272 TestPause/serial/VerifyStatus 0.36
273 TestPause/serial/Unpause 0.84
274 TestNoKubernetes/serial/StartNoArgs 6.65
275 TestPause/serial/PauseAgain 0.84
276 TestPause/serial/DeletePaused 3.4
277 TestPause/serial/VerifyDeletedResources 0.49
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
287 TestNetworkPlugins/group/false 4.1
298 TestNetworkPlugins/group/auto/Start 42.09
299 TestNetworkPlugins/group/kindnet/Start 42.93
300 TestNetworkPlugins/group/auto/KubeletFlags 0.29
301 TestNetworkPlugins/group/auto/NetCatPod 9.23
302 TestNetworkPlugins/group/auto/DNS 0.13
303 TestNetworkPlugins/group/auto/Localhost 0.11
304 TestNetworkPlugins/group/auto/HairPin 0.11
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
307 TestNetworkPlugins/group/kindnet/NetCatPod 8.23
308 TestNetworkPlugins/group/calico/Start 48.45
309 TestNetworkPlugins/group/kindnet/DNS 0.15
310 TestNetworkPlugins/group/kindnet/Localhost 0.13
311 TestNetworkPlugins/group/kindnet/HairPin 0.13
312 TestNetworkPlugins/group/custom-flannel/Start 55.95
313 TestNetworkPlugins/group/enable-default-cni/Start 43.51
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.31
316 TestNetworkPlugins/group/calico/NetCatPod 9.21
317 TestNetworkPlugins/group/calico/DNS 0.13
318 TestNetworkPlugins/group/calico/Localhost 0.1
319 TestNetworkPlugins/group/calico/HairPin 0.11
320 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
321 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.25
322 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
323 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
324 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
327 TestNetworkPlugins/group/flannel/Start 56.06
328 TestNetworkPlugins/group/custom-flannel/DNS 0.16
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
331 TestNetworkPlugins/group/bridge/Start 76.42
333 TestStartStop/group/old-k8s-version/serial/FirstStart 53.09
335 TestStartStop/group/no-preload/serial/FirstStart 48.41
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
338 TestNetworkPlugins/group/flannel/NetCatPod 9.2
339 TestNetworkPlugins/group/flannel/DNS 0.13
340 TestNetworkPlugins/group/flannel/Localhost 0.11
341 TestNetworkPlugins/group/flannel/HairPin 0.11
342 TestStartStop/group/old-k8s-version/serial/DeployApp 8.3
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
345 TestNetworkPlugins/group/bridge/NetCatPod 8.21
346 TestStartStop/group/no-preload/serial/DeployApp 9.32
347 TestStartStop/group/old-k8s-version/serial/Stop 12.18
349 TestStartStop/group/embed-certs/serial/FirstStart 39.93
350 TestNetworkPlugins/group/bridge/DNS 0.13
351 TestNetworkPlugins/group/bridge/Localhost 0.12
352 TestNetworkPlugins/group/bridge/HairPin 0.11
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
354 TestStartStop/group/no-preload/serial/Stop 12.11
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/old-k8s-version/serial/SecondStart 49.39
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
358 TestStartStop/group/no-preload/serial/SecondStart 47.77
360 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.38
361 TestStartStop/group/embed-certs/serial/DeployApp 8.28
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
363 TestStartStop/group/embed-certs/serial/Stop 12.13
364 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/embed-certs/serial/SecondStart 44.72
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
370 TestStartStop/group/old-k8s-version/serial/Pause 3.29
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
374 TestStartStop/group/newest-cni/serial/FirstStart 28.93
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
376 TestStartStop/group/no-preload/serial/Pause 3.57
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
378 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.41
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.8
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/newest-cni/serial/Stop 1.43
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/newest-cni/serial/SecondStart 11.15
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
388 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
389 TestStartStop/group/embed-certs/serial/Pause 3.21
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
393 TestStartStop/group/newest-cni/serial/Pause 2.97
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.72
TestDownloadOnly/v1.28.0/json-events (4.66s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-400547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-400547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.663259231s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.66s)
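For reference, this subtest only checks that a download-only start completes and emits JSON events; the flow is easy to reproduce by hand. A minimal sketch using the flags from the run above (the profile name "demo" is hypothetical):

  # cache the v1.28.0 images and binaries without starting a cluster
  out/minikube-linux-amd64 start -o=json --download-only -p demo --force \
    --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker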

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1115 09:22:25.401300  128258 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1115 09:22:25.403167  128258 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-124770/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
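The check passes because the preceding json-events run populated the preload cache. Assuming the same MINIKUBE_HOME this job uses, the tarball can be inspected directly:

  ls /home/jenkins/minikube-integration/21894-124770/.minikube/cache/preloaded-tarball/
  # expected: preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4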

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-400547
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-400547: exit status 85 (73.804367ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-400547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-400547 │ jenkins │ v1.37.0 │ 15 Nov 25 09:22 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:22:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:22:20.795680  128271 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:22:20.795805  128271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:22:20.795819  128271 out.go:374] Setting ErrFile to fd 2...
	I1115 09:22:20.795825  128271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:22:20.796049  128271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	W1115 09:22:20.796190  128271 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21894-124770/.minikube/config/config.json: open /home/jenkins/minikube-integration/21894-124770/.minikube/config/config.json: no such file or directory
	I1115 09:22:20.796844  128271 out.go:368] Setting JSON to true
	I1115 09:22:20.797848  128271 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14691,"bootTime":1763183850,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:22:20.797948  128271 start.go:143] virtualization: kvm guest
	I1115 09:22:20.800431  128271 out.go:99] [download-only-400547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1115 09:22:20.800595  128271 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21894-124770/.minikube/cache/preloaded-tarball: no such file or directory
	I1115 09:22:20.800631  128271 notify.go:221] Checking for updates...
	I1115 09:22:20.802305  128271 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:22:20.803756  128271 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:22:20.808352  128271 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:22:20.809792  128271 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:22:20.811093  128271 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:22:20.813609  128271 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:22:20.813881  128271 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:22:20.837640  128271 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:22:20.837710  128271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:22:21.245596  128271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-15 09:22:21.235472399 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:22:21.245711  128271 docker.go:319] overlay module found
	I1115 09:22:21.247583  128271 out.go:99] Using the docker driver based on user configuration
	I1115 09:22:21.247621  128271 start.go:309] selected driver: docker
	I1115 09:22:21.247629  128271 start.go:930] validating driver "docker" against <nil>
	I1115 09:22:21.247722  128271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:22:21.310554  128271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-15 09:22:21.300815087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:22:21.310711  128271 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:22:21.311307  128271 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:22:21.311469  128271 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:22:21.313949  128271 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-400547 host does not exist
	  To start a cluster, run: "minikube start -p download-only-400547"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
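Note that the failure above is the expected result: a download-only profile never starts a control-plane node (see "host does not exist" in the captured output), so the test treats exit status 85 from "minikube logs" as a pass. A sketch of the same check:

  out/minikube-linux-amd64 logs -p download-only-400547; echo "exit status: $?"  # 85 in this run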

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-400547
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.3s)
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-965448 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-965448 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.296743505s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.30s)

TestDownloadOnly/v1.34.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1115 09:22:29.146137  128258 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1115 09:22:29.146191  128258 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21894-124770/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-965448
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-965448: exit status 85 (74.910516ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-400547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-400547 │ jenkins │ v1.37.0 │ 15 Nov 25 09:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 15 Nov 25 09:22 UTC │ 15 Nov 25 09:22 UTC │
	│ delete  │ -p download-only-400547                                                                                                                                                               │ download-only-400547 │ jenkins │ v1.37.0 │ 15 Nov 25 09:22 UTC │ 15 Nov 25 09:22 UTC │
	│ start   │ -o=json --download-only -p download-only-965448 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-965448 │ jenkins │ v1.37.0 │ 15 Nov 25 09:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/15 09:22:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1115 09:22:25.901387  128629 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:22:25.901522  128629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:22:25.901534  128629 out.go:374] Setting ErrFile to fd 2...
	I1115 09:22:25.901540  128629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:22:25.901755  128629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:22:25.902265  128629 out.go:368] Setting JSON to true
	I1115 09:22:25.903107  128629 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14696,"bootTime":1763183850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:22:25.903200  128629 start.go:143] virtualization: kvm guest
	I1115 09:22:25.905116  128629 out.go:99] [download-only-965448] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:22:25.905275  128629 notify.go:221] Checking for updates...
	I1115 09:22:25.906698  128629 out.go:171] MINIKUBE_LOCATION=21894
	I1115 09:22:25.908026  128629 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:22:25.909182  128629 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:22:25.910284  128629 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:22:25.911414  128629 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1115 09:22:25.913607  128629 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1115 09:22:25.913849  128629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:22:25.936400  128629 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:22:25.936471  128629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:22:25.991767  128629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-11-15 09:22:25.981196458 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:22:25.991906  128629 docker.go:319] overlay module found
	I1115 09:22:25.993591  128629 out.go:99] Using the docker driver based on user configuration
	I1115 09:22:25.993633  128629 start.go:309] selected driver: docker
	I1115 09:22:25.993642  128629 start.go:930] validating driver "docker" against <nil>
	I1115 09:22:25.993728  128629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:22:26.052895  128629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:54 SystemTime:2025-11-15 09:22:26.042513221 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:22:26.053070  128629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1115 09:22:26.053569  128629 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1115 09:22:26.053718  128629 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1115 09:22:26.055555  128629 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-965448 host does not exist
	  To start a cluster, run: "minikube start -p download-only-965448"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-965448
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.41s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-015089 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-015089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-015089
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.81s)
=== RUN   TestBinaryMirror
I1115 09:22:30.283556  128258 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-946072 --alsologtostderr --binary-mirror http://127.0.0.1:33175 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-946072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-946072
--- PASS: TestBinaryMirror (0.81s)
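TestBinaryMirror verifies that the kubectl binary can be fetched through a user-supplied mirror rather than dl.k8s.io. The invocation from this run (the mirror is a throwaway local HTTP server on port 33175):

  out/minikube-linux-amd64 start --download-only -p binary-mirror-946072 --alsologtostderr \
    --binary-mirror http://127.0.0.1:33175 --driver=docker --container-runtime=containerd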

TestOffline (52.85s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-312426 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-312426 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (50.229691741s)
helpers_test.go:175: Cleaning up "offline-containerd-312426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-312426
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-312426: (2.619025617s)
--- PASS: TestOffline (52.85s)
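The offline test is just the start/delete pair above with pinned memory and full readiness waiting; reproduced as a sketch:

  out/minikube-linux-amd64 start -p offline-containerd-312426 --alsologtostderr -v=1 \
    --memory=3072 --wait=true --driver=docker --container-runtime=containerd
  out/minikube-linux-amd64 delete -p offline-containerd-312426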

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-868580
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-868580: exit status 85 (67.202535ms)

-- stdout --
	* Profile "addons-868580" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-868580"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-868580
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-868580: exit status 85 (67.900975ms)

-- stdout --
	* Profile "addons-868580" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-868580"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (102.55s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-868580 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-868580 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m42.545457752s)
--- PASS: TestAddons/Setup (102.55s)
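Every addon exercised later in the suite is enabled in this single start; the --addons flag is simply repeated once per addon. A trimmed sketch (the complete flag list is in the command above):

  out/minikube-linux-amd64 start -p addons-868580 --wait=true --memory=4096 \
    --addons=registry --addons=metrics-server --addons=ingress \
    --driver=docker --container-runtime=containerd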

TestAddons/serial/Volcano (38.22s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 13.844394ms
addons_test.go:876: volcano-admission stabilized in 13.872759ms
addons_test.go:868: volcano-scheduler stabilized in 13.898898ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-cpk99" [2d1a5dfa-fe45-424c-a401-c050171121d1] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003307285s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-st4bv" [ce1c26f7-4fdc-4a47-bcbb-1e6fc98fe1c3] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003863214s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-29d7w" [a4b325ba-f77b-411e-95fa-d8e90b561aaf] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004017s
addons_test.go:903: (dbg) Run:  kubectl --context addons-868580 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-868580 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-868580 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [ffbf9079-2f0b-4731-a0cd-4a2aa0f922e1] Pending
helpers_test.go:352: "test-job-nginx-0" [ffbf9079-2f0b-4731-a0cd-4a2aa0f922e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [ffbf9079-2f0b-4731-a0cd-4a2aa0f922e1] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003897776s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable volcano --alsologtostderr -v=1: (11.825494944s)
--- PASS: TestAddons/serial/Volcano (38.22s)
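The Volcano flow above deletes the one-shot admission-init job, submits a VolcanoJob manifest, waits for its pod, and disables the addon again; the kubectl/minikube side of that sequence:

  kubectl --context addons-868580 delete -n volcano-system job volcano-admission-init
  kubectl --context addons-868580 create -f testdata/vcjob.yaml
  kubectl --context addons-868580 get vcjob -n my-volcano
  out/minikube-linux-amd64 -p addons-868580 addons disable volcano --alsologtostderr -v=1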

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-868580 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-868580 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.46s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-868580 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-868580 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2af491ad-6d6f-4542-a74e-0839e83223e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2af491ad-6d6f-4542-a74e-0839e83223e7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003992472s
addons_test.go:694: (dbg) Run:  kubectl --context addons-868580 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-868580 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-868580 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.46s)
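The fake-credentials check asserts that the gcp-auth webhook injected credential environment variables into an ordinary pod; the probes are plain printenv calls:

  kubectl --context addons-868580 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-868580 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"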

TestAddons/parallel/Registry (14.41s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 25.075252ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-vpxxs" [19d45cf9-f66d-432c-857f-172d8d0e3b92] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003959308s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-v9wb4" [3ec809ed-df3f-425a-b588-dbcdbb11ab4e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003757367s
addons_test.go:392: (dbg) Run:  kubectl --context addons-868580 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-868580 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-868580 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.532109976s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.41s)
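The registry addon is probed from inside the cluster through its service DNS name, using a one-off busybox pod for the HTTP request:

  kubectl --context addons-868580 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"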

TestAddons/parallel/RegistryCreds (0.7s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.95034ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-868580
addons_test.go:332: (dbg) Run:  kubectl --context addons-868580 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/Ingress (17.96s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-868580 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-868580 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-868580 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [23d509ca-485a-4615-a80c-f2e9e7fbbf51] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [23d509ca-485a-4615-a80c-f2e9e7fbbf51] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003709432s
I1115 09:25:32.333884  128258 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-868580 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable ingress --alsologtostderr -v=1: (7.748249492s)
--- PASS: TestAddons/parallel/Ingress (17.96s)
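The ingress check curls the controller from inside the node with a spoofed Host header, then resolves a test hostname against the node IP to confirm ingress-dns answers; the two probes from this run:

  out/minikube-linux-amd64 -p addons-868580 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test 192.168.49.2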

TestAddons/parallel/InspektorGadget (11.65s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-992lg" [edfbf4d3-9e19-4ec3-8e5e-627a9d5db660] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003257011s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable inspektor-gadget --alsologtostderr -v=1: (5.650270094s)
--- PASS: TestAddons/parallel/InspektorGadget (11.65s)

TestAddons/parallel/MetricsServer (5.68s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.242838ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vw28z" [bd51c082-f8b6-4948-a6f1-f94a8102da84] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004119568s
addons_test.go:463: (dbg) Run:  kubectl --context addons-868580 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

TestAddons/parallel/CSI (44.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1115 09:25:14.963006  128258 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1115 09:25:14.966328  128258 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1115 09:25:14.966357  128258 kapi.go:107] duration metric: took 3.383366ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.398619ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/11/15 09:25:23 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1e27e93f-588c-4d56-8e19-8282f7b3c465] Pending
helpers_test.go:352: "task-pv-pod" [1e27e93f-588c-4d56-8e19-8282f7b3c465] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1e27e93f-588c-4d56-8e19-8282f7b3c465] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00399286s
addons_test.go:572: (dbg) Run:  kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-868580 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-868580 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-868580 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-868580 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [317dbd44-3412-4851-967c-d5be53497bf8] Pending
helpers_test.go:352: "task-pv-pod-restore" [317dbd44-3412-4851-967c-d5be53497bf8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [317dbd44-3412-4851-967c-d5be53497bf8] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003740958s
addons_test.go:614: (dbg) Run:  kubectl --context addons-868580 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-868580 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-868580 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.544392723s)
--- PASS: TestAddons/parallel/CSI (44.65s)
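Condensed, the CSI exercise above is a create/snapshot/restore round-trip against the csi-hostpath driver. A sketch of the same flow (manifest paths are the repo's testdata/ files; the context name comes from this run):

    # PVC -> pod -> snapshot -> restore -> cleanup
    kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-868580 get volumesnapshot new-snapshot-demo \
        -o jsonpath='{.status.readyToUse}'        # poll until "true"
    kubectl --context addons-868580 delete pod task-pv-pod
    kubectl --context addons-868580 delete pvc hpvc
    kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-868580 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl --context addons-868580 delete pod task-pv-pod-restore
    kubectl --context addons-868580 delete pvc hpvc-restore
    kubectl --context addons-868580 delete volumesnapshot new-snapshot-demo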

TestAddons/parallel/Headlamp (16.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-868580 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-qk2lf" [815d51e8-5f56-495d-af8a-46c16cf7ce03] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-qk2lf" [815d51e8-5f56-495d-af8a-46c16cf7ce03] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003886024s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable headlamp --alsologtostderr -v=1: (5.767477247s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-vcb6x" [40f6fe41-59e6-44c6-96a8-650a5f3309b7] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004255292s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/parallel/LocalPath (8.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-868580 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-868580 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4222f737-7233-4d88-bc19-7f9e045aebc9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4222f737-7233-4d88-bc19-7f9e045aebc9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4222f737-7233-4d88-bc19-7f9e045aebc9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003109841s
addons_test.go:967: (dbg) Run:  kubectl --context addons-868580 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 ssh "cat /opt/local-path-provisioner/pvc-242e24ae-d5a7-4dad-9b52-9c4a0367e91c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-868580 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-868580 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.19s)
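The local-path check boils down to: claim a volume, let a pod write to it, then read the file back from the node. A sketch; note the pvc-…_default_test-pvc directory name seen above is generated per-claim, so the exact path differs between runs:

    kubectl --context addons-868580 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-868580 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod completes, the written data lives on the node under
    # /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/
    minikube -p addons-868580 ssh "ls /opt/local-path-provisioner/"
    kubectl --context addons-868580 delete pod test-local-path
    kubectl --context addons-868580 delete pvc test-pvc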

TestAddons/parallel/NvidiaDevicePlugin (5.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-xlgnt" [e3d23a9e-a1f0-4962-8e71-c5ca5112ab74] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003651382s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.52s)

TestAddons/parallel/Yakd (10.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ggxsz" [43d2186e-a947-497a-8941-07531d00b990] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003750963s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-868580 addons disable yakd --alsologtostderr -v=1: (5.661222038s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-lgm4x" [c09a8af8-b724-42fc-9660-05bb6c7d2dc1] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003953823s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-868580 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.49s)

TestAddons/StoppedEnableDisable (12.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-868580
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-868580: (12.394024962s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-868580
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-868580
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-868580
--- PASS: TestAddons/StoppedEnableDisable (12.70s)

TestCertOptions (26.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-822092 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-822092 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (23.441851464s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-822092 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-822092 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-822092 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-822092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-822092
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-822092: (2.005898793s)
--- PASS: TestCertOptions (26.16s)
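What this test asserts is that the extra --apiserver-ips/--apiserver-names values end up as SANs in the apiserver certificate, and that the non-default --apiserver-port is honored. Checking by hand (the grep filter is a convenience I'm adding, not part of the test, which inspects the full `openssl x509` output):

    minikube start -p cert-options-822092 --memory=3072 \
        --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=docker --container-runtime=containerd
    minikube -p cert-options-822092 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 'Subject Alternative Name'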

TestCertExpiration (217.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504206 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504206 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (28.165412837s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504206 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1115 10:04:13.452109  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:04:13.714432  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504206 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.382664996s)
helpers_test.go:175: Cleaning up "cert-expiration-504206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-504206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-504206: (2.506924809s)
--- PASS: TestCertExpiration (217.06s)
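The test starts a cluster whose certificates expire in 3 minutes, waits out the expiry (hence the long gap between the two start calls above and the 217s total), and confirms a second `start` with a new --cert-expiration rotates them. A minimal sketch:

    minikube start -p cert-expiration-504206 --memory=3072 --cert-expiration=3m \
        --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certs expire
    # restarting with a longer expiration forces certificate regeneration
    minikube start -p cert-expiration-504206 --memory=3072 --cert-expiration=8760h \
        --driver=docker --container-runtime=containerd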

TestForceSystemdFlag (24.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-826563 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-826563 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.790275467s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-826563 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-826563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-826563
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-826563: (2.438916859s)
--- PASS: TestForceSystemdFlag (24.54s)
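--force-systemd should put containerd on the systemd cgroup driver, which the test verifies by reading the rendered config over ssh. A hand-run equivalent; the grep target is my assumption about the relevant setting (the test inspects the file contents itself):

    minikube start -p force-systemd-flag-826563 --memory=3072 --force-systemd \
        --driver=docker --container-runtime=containerd
    minikube -p force-systemd-flag-826563 ssh "cat /etc/containerd/config.toml" \
        | grep SystemdCgroup   # expect: SystemdCgroup = true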

TestForceSystemdEnv (35.91s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-373059 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-373059 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.023323692s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-373059 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-373059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-373059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-373059: (2.58660347s)
--- PASS: TestForceSystemdEnv (35.91s)

TestDockerEnvContainerd (35s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-642088 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-642088 --driver=docker  --container-runtime=containerd: (20.171834376s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-642088"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXCWavOM/agent.152019" SSH_AGENT_PID="152020" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXCWavOM/agent.152019" SSH_AGENT_PID="152020" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXCWavOM/agent.152019" SSH_AGENT_PID="152020" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-642088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-642088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-642088: (1.918093968s)
--- PASS: TestDockerEnvContainerd (35.00s)
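With `--ssh-host --ssh-add`, `docker-env` exports an SSH-based DOCKER_HOST plus an ssh-agent socket, which is what the SSH_AUTH_SOCK / DOCKER_HOST assignments above amount to. A sketch of the same flow using the usual eval pattern:

    minikube start -p dockerenv-642088 --driver=docker --container-runtime=containerd
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-642088)"
    docker version                       # now talks to the node over ssh://
    DOCKER_BUILDKIT=0 docker build \
        -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls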

TestErrorSpam/setup (19.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-005494 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-005494 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-005494 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-005494 --driver=docker  --container-runtime=containerd: (19.42417824s)
--- PASS: TestErrorSpam/setup (19.42s)

TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (12.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 stop: (11.861129878s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-005494 --log_dir /tmp/nospam-005494 stop
--- PASS: TestErrorSpam/stop (12.07s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21894-124770/.minikube/files/etc/test/nested/copy/128258/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-643455 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.166074501s)
--- PASS: TestFunctional/serial/StartWithProxy (40.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.66s)

=== RUN   TestFunctional/serial/SoftStart
I1115 09:28:13.391817  128258 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-643455 --alsologtostderr -v=8: (6.659983872s)
functional_test.go:678: soft start took 6.660812531s for "functional-643455" cluster.
I1115 09:28:20.052301  128258 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.66s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-643455 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-643455 /tmp/TestFunctionalserialCacheCmdcacheadd_local1623596204/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache add minikube-local-cache-test:functional-643455
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache delete minikube-local-cache-test:functional-643455
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-643455
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.498829ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
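The reload check deletes the image out from under the runtime and asserts that `cache reload` puts it back: `crictl inspecti` exits non-zero while the image is gone (the FATA line above) and succeeds again after the reload. By hand:

    minikube -p functional-643455 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-643455 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: no such image
    minikube -p functional-643455 cache reload
    minikube -p functional-643455 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds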

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 kubectl -- --context functional-643455 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-643455 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-643455 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.273778957s)
functional_test.go:776: restart took 40.273922737s for "functional-643455" cluster.
I1115 09:29:06.221435  128258 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.27s)
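--extra-config threads component flags through to the named Kubernetes component using component.key=value syntax; here it restarts the existing profile with an extra apiserver admission plugin, and the ~40s is a full restart with --wait=all. Equivalent invocation:

    # restart the running profile with an extra apiserver flag
    minikube start -p functional-643455 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all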

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-643455 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 logs: (1.227208516s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 logs --file /tmp/TestFunctionalserialLogsFileCmd3198611802/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 logs --file /tmp/TestFunctionalserialLogsFileCmd3198611802/001/logs.txt: (1.229909408s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (4.5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-643455 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-643455
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-643455: exit status 115 (547.947427ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30328 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-643455 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 config get cpus: exit status 14 (101.681929ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 config get cpus
E1115 09:29:13.714461  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:29:13.722325  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 config get cpus: exit status 14 (84.847205ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
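`config get` on an unset key is expected to fail with exit status 14 (the two Non-zero exit lines above); otherwise set/get/unset round-trips cleanly. A sketch of the cycle the test runs:

    minikube -p functional-643455 config unset cpus
    minikube -p functional-643455 config get cpus; echo $?   # 14: key not found
    minikube -p functional-643455 config set cpus 2
    minikube -p functional-643455 config get cpus            # prints 2
    minikube -p functional-643455 config unset cpus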

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (170.338387ms)
-- stdout --
	* [functional-643455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1115 09:29:44.476808  174762 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:29:44.477142  174762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.477155  174762 out.go:374] Setting ErrFile to fd 2...
	I1115 09:29:44.477179  174762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.477408  174762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:29:44.477857  174762 out.go:368] Setting JSON to false
	I1115 09:29:44.478926  174762 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15134,"bootTime":1763183850,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:29:44.479030  174762 start.go:143] virtualization: kvm guest
	I1115 09:29:44.481308  174762 out.go:179] * [functional-643455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 09:29:44.482756  174762 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:29:44.482768  174762 notify.go:221] Checking for updates...
	I1115 09:29:44.485287  174762 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:29:44.486581  174762 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:29:44.487829  174762 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:29:44.489126  174762 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:29:44.490383  174762 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:29:44.492268  174762 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:29:44.492742  174762 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:29:44.516322  174762 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:29:44.516426  174762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:29:44.578563  174762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:44.568380106 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:29:44.578670  174762 docker.go:319] overlay module found
	I1115 09:29:44.580912  174762 out.go:179] * Using the docker driver based on existing profile
	I1115 09:29:44.582541  174762 start.go:309] selected driver: docker
	I1115 09:29:44.582561  174762 start.go:930] validating driver "docker" against &{Name:functional-643455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-643455 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:29:44.582665  174762 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:29:44.584648  174762 out.go:203] 
	W1115 09:29:44.585988  174762 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1115 09:29:44.587234  174762 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-643455 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (170.003246ms)

                                                
                                                
-- stdout --
	* [functional-643455] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:29:44.875838  174988 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:29:44.875936  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.875944  174988 out.go:374] Setting ErrFile to fd 2...
	I1115 09:29:44.875957  174988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:29:44.876325  174988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:29:44.876748  174988 out.go:368] Setting JSON to false
	I1115 09:29:44.877812  174988 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":15135,"bootTime":1763183850,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 09:29:44.877921  174988 start.go:143] virtualization: kvm guest
	I1115 09:29:44.880159  174988 out.go:179] * [functional-643455] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1115 09:29:44.881646  174988 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 09:29:44.881678  174988 notify.go:221] Checking for updates...
	I1115 09:29:44.884009  174988 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 09:29:44.885173  174988 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 09:29:44.886339  174988 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 09:29:44.887594  174988 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 09:29:44.888818  174988 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 09:29:44.890443  174988 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:29:44.890911  174988 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 09:29:44.915414  174988 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 09:29:44.915506  174988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:29:44.974874  174988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-15 09:29:44.965206839 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:29:44.974982  174988 docker.go:319] overlay module found
	I1115 09:29:44.976788  174988 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1115 09:29:44.978139  174988 start.go:309] selected driver: docker
	I1115 09:29:44.978155  174988 start.go:930] validating driver "docker" against &{Name:functional-643455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-643455 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1115 09:29:44.978254  174988 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 09:29:44.980009  174988 out.go:203] 
	W1115 09:29:44.981297  174988 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1115 09:29:44.982533  174988 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-643455 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-643455 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-q2qtv" [a2dd1e39-ea06-4eb8-8daf-708757ff9587] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-q2qtv" [a2dd1e39-ea06-4eb8-8daf-708757ff9587] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.040265287s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30392
functional_test.go:1680: http://192.168.49.2:30392: success! body:
Request served by hello-node-connect-7d85dfc575-q2qtv

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30392
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh -n functional-643455 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cp functional-643455:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4275881094/001/cp-test.txt
E1115 09:29:16.286724  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh -n functional-643455 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh -n functional-643455 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.92s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-643455 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-5bd4x" [8dd16d91-1cf2-4c7a-8183-1eff35929b58] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-5bd4x" [8dd16d91-1cf2-4c7a-8183-1eff35929b58] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003573312s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-643455 exec mysql-5bb876957f-5bd4x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-643455 exec mysql-5bb876957f-5bd4x -- mysql -ppassword -e "show databases;": exit status 1 (132.470348ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:29:28.955855  128258 retry.go:31] will retry after 1.296039914s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-643455 exec mysql-5bb876957f-5bd4x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-643455 exec mysql-5bb876957f-5bd4x -- mysql -ppassword -e "show databases;": exit status 1 (109.328076ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1115 09:29:30.362471  128258 retry.go:31] will retry after 2.09576954s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-643455 exec mysql-5bb876957f-5bd4x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.94s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/128258/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /etc/test/nested/copy/128258/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/128258.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /etc/ssl/certs/128258.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/128258.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /usr/share/ca-certificates/128258.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1282582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /etc/ssl/certs/1282582.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1282582.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /usr/share/ca-certificates/1282582.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E1115 09:29:15.005218  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/CertSync (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-643455 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "sudo systemctl is-active docker": exit status 1 (310.715903ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo systemctl is-active crio"
E1115 09:29:14.363657  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "sudo systemctl is-active crio": exit status 1 (328.623776ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
E1115 09:29:13.733958  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:29:13.755520  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:29:13.797287  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:29:13.878931  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-643455 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-643455
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-643455
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-643455 image ls --format short --alsologtostderr:
I1115 09:34:48.303429  178531 out.go:360] Setting OutFile to fd 1 ...
I1115 09:34:48.303692  178531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:48.303703  178531 out.go:374] Setting ErrFile to fd 2...
I1115 09:34:48.303710  178531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:48.303941  178531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:34:48.304665  178531 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:48.304765  178531 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:48.305215  178531 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:34:48.323381  178531 ssh_runner.go:195] Run: systemctl --version
I1115 09:34:48.323428  178531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:34:48.341469  178531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:34:48.434988  178531 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-643455 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-643455  │ sha256:1ec8a8 │ 991B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server               │ functional-643455  │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ localhost/my-image                          │ functional-643455  │ sha256:93823a │ 775kB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-643455 image ls --format table --alsologtostderr:
I1115 09:34:51.692007  179024 out.go:360] Setting OutFile to fd 1 ...
I1115 09:34:51.692293  179024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:51.692304  179024 out.go:374] Setting ErrFile to fd 2...
I1115 09:34:51.692308  179024 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:51.692511  179024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:34:51.693082  179024 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:51.693172  179024 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:51.693568  179024 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:34:51.711820  179024 ssh_runner.go:195] Run: systemctl --version
I1115 09:34:51.711869  179024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:34:51.729632  179024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:34:51.823027  179024 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-643455 image ls --format json --alsologtostderr:
[{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:1ec8a8233e5e9476b7aa9143d093c4dddb7688296c8cd5cb578042eac8b63771","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-643455"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6c
eeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:93823aa38741e4cee9447ab46ca7d8340b95896052224069b50acb58a8bed831","repoDigests":[],"repoTags":["localhost/my-image:functional-643455"],"size":"774888"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5
d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manag
er:v1.34.1"],"size":"22820214"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-643455"],"size":"2372971"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-643455 image ls --format json --alsologtostderr:
I1115 09:34:51.472157  178970 out.go:360] Setting OutFile to fd 1 ...
I1115 09:34:51.472261  178970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:51.472268  178970 out.go:374] Setting ErrFile to fd 2...
I1115 09:34:51.472272  178970 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:51.472445  178970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:34:51.473045  178970 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:51.473146  178970 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:51.473528  178970 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:34:51.492538  178970 ssh_runner.go:195] Run: systemctl --version
I1115 09:34:51.492615  178970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:34:51.510329  178970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:34:51.603036  178970 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-643455 image ls --format yaml --alsologtostderr:
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-643455
size: "2372971"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:1ec8a8233e5e9476b7aa9143d093c4dddb7688296c8cd5cb578042eac8b63771
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-643455
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-643455 image ls --format yaml --alsologtostderr:
I1115 09:34:48.522773  178583 out.go:360] Setting OutFile to fd 1 ...
I1115 09:34:48.523013  178583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:48.523022  178583 out.go:374] Setting ErrFile to fd 2...
I1115 09:34:48.523026  178583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:48.523232  178583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:34:48.523766  178583 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:48.523861  178583 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:48.524256  178583 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:34:48.542232  178583 ssh_runner.go:195] Run: systemctl --version
I1115 09:34:48.542282  178583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:34:48.560312  178583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:34:48.653160  178583 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh pgrep buildkitd: exit status 1 (272.703571ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image build -t localhost/my-image:functional-643455 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 image build -t localhost/my-image:functional-643455 testdata/build --alsologtostderr: (2.227580674s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-643455 image build -t localhost/my-image:functional-643455 testdata/build --alsologtostderr:
I1115 09:34:49.021218  178743 out.go:360] Setting OutFile to fd 1 ...
I1115 09:34:49.021527  178743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:49.021538  178743 out.go:374] Setting ErrFile to fd 2...
I1115 09:34:49.021543  178743 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1115 09:34:49.021892  178743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
I1115 09:34:49.022617  178743 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:49.023423  178743 config.go:182] Loaded profile config "functional-643455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1115 09:34:49.023869  178743 cli_runner.go:164] Run: docker container inspect functional-643455 --format={{.State.Status}}
I1115 09:34:49.043115  178743 ssh_runner.go:195] Run: systemctl --version
I1115 09:34:49.043177  178743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-643455
I1115 09:34:49.061347  178743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/functional-643455/id_rsa Username:docker}
I1115 09:34:49.154936  178743 build_images.go:162] Building image from path: /tmp/build.1207801677.tar
I1115 09:34:49.155047  178743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1115 09:34:49.163359  178743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1207801677.tar
I1115 09:34:49.167669  178743 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1207801677.tar: stat -c "%s %y" /var/lib/minikube/build/build.1207801677.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1207801677.tar': No such file or directory
I1115 09:34:49.167723  178743 ssh_runner.go:362] scp /tmp/build.1207801677.tar --> /var/lib/minikube/build/build.1207801677.tar (3072 bytes)
I1115 09:34:49.186869  178743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1207801677
I1115 09:34:49.194705  178743 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1207801677 -xf /var/lib/minikube/build/build.1207801677.tar
I1115 09:34:49.202845  178743 containerd.go:394] Building image: /var/lib/minikube/build/build.1207801677
I1115 09:34:49.202913  178743 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1207801677 --local dockerfile=/var/lib/minikube/build/build.1207801677 --output type=image,name=localhost/my-image:functional-643455
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1b582423df31bc82b4e274f205ee8f7df630e3e05f1f1be1d5269a4ed7fe5c5f done
#8 exporting config sha256:93823aa38741e4cee9447ab46ca7d8340b95896052224069b50acb58a8bed831
#8 exporting config sha256:93823aa38741e4cee9447ab46ca7d8340b95896052224069b50acb58a8bed831 done
#8 naming to localhost/my-image:functional-643455 done
#8 DONE 0.1s
I1115 09:34:51.163799  178743 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1207801677 --local dockerfile=/var/lib/minikube/build/build.1207801677 --output type=image,name=localhost/my-image:functional-643455: (1.960834622s)
I1115 09:34:51.163875  178743 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1207801677
I1115 09:34:51.173010  178743 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1207801677.tar
I1115 09:34:51.181648  178743 build_images.go:218] Built localhost/my-image:functional-643455 from /tmp/build.1207801677.tar
I1115 09:34:51.181694  178743 build_images.go:134] succeeded building to: functional-643455
I1115 09:34:51.181701  178743 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
E1115 09:29:14.040333  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-643455
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image load --daemon kicbase/echo-server:functional-643455 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image load --daemon kicbase/echo-server:functional-643455 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-643455
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image load --daemon kicbase/echo-server:functional-643455 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image save kicbase/echo-server:functional-643455 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image rm kicbase/echo-server:functional-643455 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
E1115 09:29:18.848163  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-643455
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 image save --daemon kicbase/echo-server:functional-643455 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-643455
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 170595: os: process already finished
helpers_test.go:519: unable to terminate pid 170392: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.840761ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.527843ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "333.689088ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.430819ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdany-port532044957/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763198973779415877" to /tmp/TestFunctionalparallelMountCmdany-port532044957/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763198973779415877" to /tmp/TestFunctionalparallelMountCmdany-port532044957/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763198973779415877" to /tmp/TestFunctionalparallelMountCmdany-port532044957/001/test-1763198973779415877
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.419174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:29:34.067138  128258 retry.go:31] will retry after 604.527787ms: exit status 1
E1115 09:29:34.212704  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 15 09:29 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 15 09:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 15 09:29 test-1763198973779415877
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh cat /mount-9p/test-1763198973779415877
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-643455 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d55dcbf2-28e3-42b4-afad-67f9b04053b8] Pending
helpers_test.go:352: "busybox-mount" [d55dcbf2-28e3-42b4-afad-67f9b04053b8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d55dcbf2-28e3-42b4-afad-67f9b04053b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d55dcbf2-28e3-42b4-afad-67f9b04053b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003061339s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-643455 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdany-port532044957/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.85s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdspecific-port3528019422/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.528579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:29:40.912299  128258 retry.go:31] will retry after 566.115892ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdspecific-port3528019422/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "sudo umount -f /mount-9p": exit status 1 (272.661593ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-643455 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdspecific-port3528019422/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T" /mount1: exit status 1 (351.762637ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1115 09:29:42.863447  128258 retry.go:31] will retry after 528.792231ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-643455 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-643455 /tmp/TestFunctionalparallelMountCmdVerifyCleanup751928891/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-643455 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 service list: (1.709316137s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-643455 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-643455 service list -o json: (1.701852447s)
functional_test.go:1504: Took "1.701951445s" to run "out/minikube-linux-amd64 -p functional-643455 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-643455
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-643455
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-643455
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (116.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m56.006983466s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (116.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 kubectl -- rollout status deployment/busybox: (2.770252108s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-cvzpn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-qf92x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-rz2bq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-cvzpn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-qf92x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-rz2bq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-cvzpn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-qf92x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-rz2bq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-cvzpn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-cvzpn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-qf92x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-qf92x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-rz2bq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 kubectl -- exec busybox-7b57f96db7-rz2bq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (23.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 node add --alsologtostderr -v 5: (23.066502519s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-823266 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp testdata/cp-test.txt ha-823266:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2680078631/001/cp-test_ha-823266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266:/home/docker/cp-test.txt ha-823266-m02:/home/docker/cp-test_ha-823266_ha-823266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test_ha-823266_ha-823266-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266:/home/docker/cp-test.txt ha-823266-m03:/home/docker/cp-test_ha-823266_ha-823266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test_ha-823266_ha-823266-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266:/home/docker/cp-test.txt ha-823266-m04:/home/docker/cp-test_ha-823266_ha-823266-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test_ha-823266_ha-823266-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp testdata/cp-test.txt ha-823266-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2680078631/001/cp-test_ha-823266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m02:/home/docker/cp-test.txt ha-823266:/home/docker/cp-test_ha-823266-m02_ha-823266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test_ha-823266-m02_ha-823266.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m02:/home/docker/cp-test.txt ha-823266-m03:/home/docker/cp-test_ha-823266-m02_ha-823266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test_ha-823266-m02_ha-823266-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m02:/home/docker/cp-test.txt ha-823266-m04:/home/docker/cp-test_ha-823266-m02_ha-823266-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test_ha-823266-m02_ha-823266-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp testdata/cp-test.txt ha-823266-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2680078631/001/cp-test_ha-823266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m03:/home/docker/cp-test.txt ha-823266:/home/docker/cp-test_ha-823266-m03_ha-823266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test_ha-823266-m03_ha-823266.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m03:/home/docker/cp-test.txt ha-823266-m02:/home/docker/cp-test_ha-823266-m03_ha-823266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test_ha-823266-m03_ha-823266-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m03:/home/docker/cp-test.txt ha-823266-m04:/home/docker/cp-test_ha-823266-m03_ha-823266-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test_ha-823266-m03_ha-823266-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp testdata/cp-test.txt ha-823266-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2680078631/001/cp-test_ha-823266-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m04:/home/docker/cp-test.txt ha-823266:/home/docker/cp-test_ha-823266-m04_ha-823266.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266 "sudo cat /home/docker/cp-test_ha-823266-m04_ha-823266.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m04:/home/docker/cp-test.txt ha-823266-m02:/home/docker/cp-test_ha-823266-m04_ha-823266-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m02 "sudo cat /home/docker/cp-test_ha-823266-m04_ha-823266-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 cp ha-823266-m04:/home/docker/cp-test.txt ha-823266-m03:/home/docker/cp-test_ha-823266-m04_ha-823266-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 ssh -n ha-823266-m03 "sudo cat /home/docker/cp-test_ha-823266-m04_ha-823266-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 node stop m02 --alsologtostderr -v 5: (12.043790085s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5: exit status 7 (694.92467ms)

                                                
                                                
-- stdout --
	ha-823266
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-823266-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-823266-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-823266-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:42:30.564739  203290 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:42:30.565067  203290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:42:30.565082  203290 out.go:374] Setting ErrFile to fd 2...
	I1115 09:42:30.565088  203290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:42:30.565377  203290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:42:30.565573  203290 out.go:368] Setting JSON to false
	I1115 09:42:30.565607  203290 mustload.go:66] Loading cluster: ha-823266
	I1115 09:42:30.565738  203290 notify.go:221] Checking for updates...
	I1115 09:42:30.566063  203290 config.go:182] Loaded profile config "ha-823266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:42:30.566081  203290 status.go:174] checking status of ha-823266 ...
	I1115 09:42:30.566611  203290 cli_runner.go:164] Run: docker container inspect ha-823266 --format={{.State.Status}}
	I1115 09:42:30.587452  203290 status.go:371] ha-823266 host status = "Running" (err=<nil>)
	I1115 09:42:30.587485  203290 host.go:66] Checking if "ha-823266" exists ...
	I1115 09:42:30.587827  203290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-823266
	I1115 09:42:30.606025  203290 host.go:66] Checking if "ha-823266" exists ...
	I1115 09:42:30.606306  203290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:42:30.606365  203290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-823266
	I1115 09:42:30.625343  203290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/ha-823266/id_rsa Username:docker}
	I1115 09:42:30.718818  203290 ssh_runner.go:195] Run: systemctl --version
	I1115 09:42:30.725680  203290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:30.739492  203290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:42:30.796812  203290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-15 09:42:30.787108071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:42:30.797448  203290 kubeconfig.go:125] found "ha-823266" server: "https://192.168.49.254:8443"
	I1115 09:42:30.797479  203290 api_server.go:166] Checking apiserver status ...
	I1115 09:42:30.797521  203290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:30.810291  203290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1115 09:42:30.819073  203290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:30.819136  203290 ssh_runner.go:195] Run: ls
	I1115 09:42:30.823093  203290 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:42:30.829187  203290 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:42:30.829214  203290 status.go:463] ha-823266 apiserver status = Running (err=<nil>)
	I1115 09:42:30.829227  203290 status.go:176] ha-823266 status: &{Name:ha-823266 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:42:30.829244  203290 status.go:174] checking status of ha-823266-m02 ...
	I1115 09:42:30.829493  203290 cli_runner.go:164] Run: docker container inspect ha-823266-m02 --format={{.State.Status}}
	I1115 09:42:30.848615  203290 status.go:371] ha-823266-m02 host status = "Stopped" (err=<nil>)
	I1115 09:42:30.848638  203290 status.go:384] host is not running, skipping remaining checks
	I1115 09:42:30.848645  203290 status.go:176] ha-823266-m02 status: &{Name:ha-823266-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:42:30.848669  203290 status.go:174] checking status of ha-823266-m03 ...
	I1115 09:42:30.848914  203290 cli_runner.go:164] Run: docker container inspect ha-823266-m03 --format={{.State.Status}}
	I1115 09:42:30.866689  203290 status.go:371] ha-823266-m03 host status = "Running" (err=<nil>)
	I1115 09:42:30.866722  203290 host.go:66] Checking if "ha-823266-m03" exists ...
	I1115 09:42:30.867098  203290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-823266-m03
	I1115 09:42:30.886185  203290 host.go:66] Checking if "ha-823266-m03" exists ...
	I1115 09:42:30.886501  203290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:42:30.886548  203290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-823266-m03
	I1115 09:42:30.906341  203290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/ha-823266-m03/id_rsa Username:docker}
	I1115 09:42:30.998944  203290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:31.012155  203290 kubeconfig.go:125] found "ha-823266" server: "https://192.168.49.254:8443"
	I1115 09:42:31.012182  203290 api_server.go:166] Checking apiserver status ...
	I1115 09:42:31.012213  203290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:42:31.023843  203290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup
	W1115 09:42:31.032442  203290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:42:31.032496  203290 ssh_runner.go:195] Run: ls
	I1115 09:42:31.036289  203290 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1115 09:42:31.040906  203290 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1115 09:42:31.040930  203290 status.go:463] ha-823266-m03 apiserver status = Running (err=<nil>)
	I1115 09:42:31.040940  203290 status.go:176] ha-823266-m03 status: &{Name:ha-823266-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:42:31.040954  203290 status.go:174] checking status of ha-823266-m04 ...
	I1115 09:42:31.041288  203290 cli_runner.go:164] Run: docker container inspect ha-823266-m04 --format={{.State.Status}}
	I1115 09:42:31.060271  203290 status.go:371] ha-823266-m04 host status = "Running" (err=<nil>)
	I1115 09:42:31.060294  203290 host.go:66] Checking if "ha-823266-m04" exists ...
	I1115 09:42:31.060555  203290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-823266-m04
	I1115 09:42:31.077350  203290 host.go:66] Checking if "ha-823266-m04" exists ...
	I1115 09:42:31.077640  203290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:42:31.077699  203290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-823266-m04
	I1115 09:42:31.095740  203290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/ha-823266-m04/id_rsa Username:docker}
	I1115 09:42:31.186289  203290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:42:31.198542  203290 status.go:176] ha-823266-m04 status: &{Name:ha-823266-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (9.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 node start m02 --alsologtostderr -v 5: (8.116908503s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 stop --alsologtostderr -v 5: (37.302285456s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 start --wait true --alsologtostderr -v 5
E1115 09:44:13.451622  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.458102  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.469480  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.490886  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.532242  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.613712  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.714302  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:13.775788  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:14.097482  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:14.738763  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:16.020890  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:18.582712  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 start --wait true --alsologtostderr -v 5: (1m1.304309109s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.74s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.27s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node delete m03 --alsologtostderr -v 5
E1115 09:44:23.704743  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 node delete m03 --alsologtostderr -v 5: (8.463930009s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.27s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (36.17s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 stop --alsologtostderr -v 5
E1115 09:44:33.946653  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:44:54.428321  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 stop --alsologtostderr -v 5: (36.048665039s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5: exit status 7 (122.7124ms)

-- stdout --
	ha-823266
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-823266-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-823266-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1115 09:45:06.702856  219802 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:45:06.702988  219802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:45:06.703001  219802 out.go:374] Setting ErrFile to fd 2...
	I1115 09:45:06.703007  219802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:45:06.703245  219802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:45:06.703414  219802 out.go:368] Setting JSON to false
	I1115 09:45:06.703446  219802 mustload.go:66] Loading cluster: ha-823266
	I1115 09:45:06.703509  219802 notify.go:221] Checking for updates...
	I1115 09:45:06.703826  219802 config.go:182] Loaded profile config "ha-823266": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:45:06.703844  219802 status.go:174] checking status of ha-823266 ...
	I1115 09:45:06.704352  219802 cli_runner.go:164] Run: docker container inspect ha-823266 --format={{.State.Status}}
	I1115 09:45:06.724244  219802 status.go:371] ha-823266 host status = "Stopped" (err=<nil>)
	I1115 09:45:06.724278  219802 status.go:384] host is not running, skipping remaining checks
	I1115 09:45:06.724286  219802 status.go:176] ha-823266 status: &{Name:ha-823266 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:45:06.724347  219802 status.go:174] checking status of ha-823266-m02 ...
	I1115 09:45:06.724719  219802 cli_runner.go:164] Run: docker container inspect ha-823266-m02 --format={{.State.Status}}
	I1115 09:45:06.743338  219802 status.go:371] ha-823266-m02 host status = "Stopped" (err=<nil>)
	I1115 09:45:06.743360  219802 status.go:384] host is not running, skipping remaining checks
	I1115 09:45:06.743367  219802 status.go:176] ha-823266-m02 status: &{Name:ha-823266-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:45:06.743389  219802 status.go:174] checking status of ha-823266-m04 ...
	I1115 09:45:06.743621  219802 cli_runner.go:164] Run: docker container inspect ha-823266-m04 --format={{.State.Status}}
	I1115 09:45:06.763222  219802 status.go:371] ha-823266-m04 host status = "Stopped" (err=<nil>)
	I1115 09:45:06.763277  219802 status.go:384] host is not running, skipping remaining checks
	I1115 09:45:06.763288  219802 status.go:176] ha-823266-m04 status: &{Name:ha-823266-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.17s)
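The status check above exits with code 7 once every node reports Stopped. A minimal sketch of reading that code from Go, assuming minikube's bitmask convention for "minikube status" exit codes (bit 0: host not running, bit 1: kubelet not running, bit 2: apiserver not running, so 7 means fully stopped); the profile name is the one used in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test drives and inspect its exit code.
	cmd := exec.Command("minikube", "-p", "ha-823266", "status")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code := exitErr.ExitCode()
		// Interpreting the code as a bitmask is the assumption stated above.
		fmt.Printf("host stopped: %v, kubelet stopped: %v, apiserver stopped: %v\n",
			code&1 != 0, code&2 != 0, code&4 != 0)
	} else if err == nil {
		fmt.Println("cluster running")
	} else {
		fmt.Println("could not run minikube:", err)
	}
}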

TestMultiControlPlane/serial/RestartCluster (57.29s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1115 09:45:35.390632  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:45:36.781406  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (56.480723567s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.29s)
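The go-template passed to kubectl at ha_test.go:594 walks .items and prints the status of each node's Ready condition. A minimal, self-contained sketch of evaluating that same template with Go's text/template package; the nodesJSON literal is a hand-written stand-in for "kubectl get nodes -o json", not output from this run:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Stand-in for `kubectl get nodes -o json` (two Ready nodes).
const nodesJSON = `{"items":[
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

func main() {
	var nodes map[string]any
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// The exact template from the test: range over items, range over
	// conditions, print .status for the "Ready" condition only.
	t := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}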

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (47.97s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-823266 node add --control-plane --alsologtostderr -v 5: (47.088566865s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-823266 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (35.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-603274 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-603274 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (35.899069687s)
--- PASS: TestJSONOutput/start/Command (35.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-603274 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-603274 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-603274 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-603274 --output=json --user=testUser: (5.854583292s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-624489 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-624489 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (84.691277ms)

-- stdout --
	{"specversion":"1.0","id":"ce68f4a1-4eb8-46ed-aa97-15ed6c7f1ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-624489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f38e1f7e-bc61-4100-b4be-3e460bfc23d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"6785a9d1-8f2a-43bd-9f22-abcb8b4bffcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c315b5b-5138-49c2-8c22-7b9dba92b17b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig"}}
	{"specversion":"1.0","id":"5ad5ef62-af1f-4df9-b85c-bd4e42aa969d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube"}}
	{"specversion":"1.0","id":"27ee7662-5afa-4380-a393-0f119379eaee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0b8461d7-af1f-4df3-9bdb-1586d96b2122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ba98d8d-4195-489f-93fe-e9adbf25e260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-624489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-624489
--- PASS: TestErrorJSONOutput (0.24s)
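Each stdout line above is a CloudEvents-style JSON object emitted by --output=json. A minimal sketch of decoding one such line in Go; the struct covers only the fields visible in this report, and the sample line is abridged from the error event above:

package main

import (
	"encoding/json"
	"fmt"
)

// Fields observed in the events above; anything else is ignored.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"2ba98d8d-4195-489f-93fe-e9adbf25e260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}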

TestKicCustomNetwork/create_custom_network (31.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-214594 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-214594 --network=: (29.483101849s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-214594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-214594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-214594: (2.193908993s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.70s)

TestKicCustomNetwork/use_default_bridge_network (22.9s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-157785 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-157785 --network=bridge: (20.871564173s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-157785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-157785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-157785: (2.01357317s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.90s)

TestKicExistingNetwork (24.54s)

=== RUN   TestKicExistingNetwork
I1115 09:48:44.060196  128258 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1115 09:48:44.077924  128258 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1115 09:48:44.078022  128258 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1115 09:48:44.078065  128258 cli_runner.go:164] Run: docker network inspect existing-network
W1115 09:48:44.095505  128258 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1115 09:48:44.095549  128258 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1115 09:48:44.095568  128258 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1115 09:48:44.095745  128258 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1115 09:48:44.114366  128258 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-55f590eda183 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:12:01:f5:73:2f} reservation:<nil>}
I1115 09:48:44.114781  128258 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d3350}
I1115 09:48:44.114811  128258 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1115 09:48:44.114860  128258 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1115 09:48:44.164825  128258 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-850218 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-850218 --network=existing-network: (22.351216793s)
helpers_test.go:175: Cleaning up "existing-network-850218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-850218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-850218: (2.049448375s)
I1115 09:49:08.585850  128258 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.54s)
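The network_create log above skips 192.168.49.0/24 because an existing bridge holds it, then settles on 192.168.58.0/24. A minimal sketch of that kind of scan in Go: try candidate private /24s in order and take the first that does not overlap a subnet already in use. The step of 9 between candidates matches the 49-to-58 jump seen here, but the candidate list and step are assumptions for illustration, not minikube's actual selection code:

package main

import (
	"fmt"
	"net"
)

// freeSubnet returns the first candidate /24 that does not overlap any
// already-taken subnet, or nil if none is free.
func freeSubnet(taken []*net.IPNet) *net.IPNet {
	for third := 49; third <= 247; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		collides := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				collides = true
				break
			}
		}
		if !collides {
			return candidate
		}
	}
	return nil
}

func main() {
	_, used, _ := net.ParseCIDR("192.168.49.0/24") // the bridge from the log
	fmt.Println(freeSubnet([]*net.IPNet{used}))    // prints 192.168.58.0/24
}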

TestKicCustomSubnet (24.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-459215 --subnet=192.168.60.0/24
E1115 09:49:13.455321  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:49:13.714114  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-459215 --subnet=192.168.60.0/24: (22.047863887s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-459215 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-459215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-459215
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-459215: (2.175487284s)
--- PASS: TestKicCustomSubnet (24.24s)

TestKicStaticIP (24.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-766529 --static-ip=192.168.200.200
E1115 09:49:41.154323  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-766529 --static-ip=192.168.200.200: (22.380885721s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-766529 ip
helpers_test.go:175: Cleaning up "static-ip-766529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-766529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-766529: (2.140659709s)
--- PASS: TestKicStaticIP (24.68s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.47s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-556614 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-556614 --driver=docker  --container-runtime=containerd: (20.793845205s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-559427 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-559427 --driver=docker  --container-runtime=containerd: (20.751147385s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-556614
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-559427
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-559427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-559427
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-559427: (2.317889233s)
helpers_test.go:175: Cleaning up "first-556614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-556614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-556614: (2.355534142s)
--- PASS: TestMinikubeProfile (47.47s)

TestMountStart/serial/StartWithMountFirst (4.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-913052 --memory=3072 --mount-string /tmp/TestMountStartserial3414464067/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-913052 --memory=3072 --mount-string /tmp/TestMountStartserial3414464067/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.512892045s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.51s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-913052 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-930396 --memory=3072 --mount-string /tmp/TestMountStartserial3414464067/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-930396 --memory=3072 --mount-string /tmp/TestMountStartserial3414464067/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.957165137s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.96s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-930396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-913052 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-913052 --alsologtostderr -v=5: (1.673334946s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-930396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-930396
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-930396: (1.26234248s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (6.91s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-930396
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-930396: (5.91321709s)
--- PASS: TestMountStart/serial/RestartStopped (6.91s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-930396 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (62.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-319967 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-319967 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m1.806673451s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.29s)

TestMultiNode/serial/DeployApp2Nodes (4.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-319967 -- rollout status deployment/busybox: (2.844063753s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-klkqv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-nwqn9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-klkqv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-nwqn9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-klkqv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-nwqn9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-klkqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-klkqv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-nwqn9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-319967 -- exec busybox-7b57f96db7-nwqn9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (23.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-319967 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-319967 -v=5 --alsologtostderr: (23.042714812s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.68s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-319967 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.89s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp testdata/cp-test.txt multinode-319967:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile914635938/001/cp-test_multinode-319967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967:/home/docker/cp-test.txt multinode-319967-m02:/home/docker/cp-test_multinode-319967_multinode-319967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test_multinode-319967_multinode-319967-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967:/home/docker/cp-test.txt multinode-319967-m03:/home/docker/cp-test_multinode-319967_multinode-319967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test_multinode-319967_multinode-319967-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp testdata/cp-test.txt multinode-319967-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile914635938/001/cp-test_multinode-319967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m02:/home/docker/cp-test.txt multinode-319967:/home/docker/cp-test_multinode-319967-m02_multinode-319967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test_multinode-319967-m02_multinode-319967.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m02:/home/docker/cp-test.txt multinode-319967-m03:/home/docker/cp-test_multinode-319967-m02_multinode-319967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test_multinode-319967-m02_multinode-319967-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp testdata/cp-test.txt multinode-319967-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile914635938/001/cp-test_multinode-319967-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m03:/home/docker/cp-test.txt multinode-319967:/home/docker/cp-test_multinode-319967-m03_multinode-319967.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967 "sudo cat /home/docker/cp-test_multinode-319967-m03_multinode-319967.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 cp multinode-319967-m03:/home/docker/cp-test.txt multinode-319967-m02:/home/docker/cp-test_multinode-319967-m03_multinode-319967-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 ssh -n multinode-319967-m02 "sudo cat /home/docker/cp-test_multinode-319967-m03_multinode-319967-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.89s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-319967 node stop m03: (1.270391917s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-319967 status: exit status 7 (508.728728ms)

-- stdout --
	multinode-319967
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-319967-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-319967-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr: exit status 7 (496.732915ms)

-- stdout --
	multinode-319967
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-319967-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-319967-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1115 09:52:50.887884  282754 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:52:50.888145  282754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:52:50.888166  282754 out.go:374] Setting ErrFile to fd 2...
	I1115 09:52:50.888171  282754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:52:50.888349  282754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:52:50.888515  282754 out.go:368] Setting JSON to false
	I1115 09:52:50.888550  282754 mustload.go:66] Loading cluster: multinode-319967
	I1115 09:52:50.888676  282754 notify.go:221] Checking for updates...
	I1115 09:52:50.888974  282754 config.go:182] Loaded profile config "multinode-319967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:52:50.888995  282754 status.go:174] checking status of multinode-319967 ...
	I1115 09:52:50.889791  282754 cli_runner.go:164] Run: docker container inspect multinode-319967 --format={{.State.Status}}
	I1115 09:52:50.908410  282754 status.go:371] multinode-319967 host status = "Running" (err=<nil>)
	I1115 09:52:50.908451  282754 host.go:66] Checking if "multinode-319967" exists ...
	I1115 09:52:50.908732  282754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-319967
	I1115 09:52:50.926414  282754 host.go:66] Checking if "multinode-319967" exists ...
	I1115 09:52:50.926674  282754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:52:50.926710  282754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-319967
	I1115 09:52:50.944022  282754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/multinode-319967/id_rsa Username:docker}
	I1115 09:52:51.037022  282754 ssh_runner.go:195] Run: systemctl --version
	I1115 09:52:51.043484  282754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:52:51.056361  282754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 09:52:51.116832  282754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-15 09:52:51.106645679 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 09:52:51.117415  282754 kubeconfig.go:125] found "multinode-319967" server: "https://192.168.67.2:8443"
	I1115 09:52:51.117451  282754 api_server.go:166] Checking apiserver status ...
	I1115 09:52:51.117484  282754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1115 09:52:51.129603  282754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup
	W1115 09:52:51.138355  282754 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1353/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1115 09:52:51.138400  282754 ssh_runner.go:195] Run: ls
	I1115 09:52:51.142108  282754 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1115 09:52:51.146184  282754 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1115 09:52:51.146207  282754 status.go:463] multinode-319967 apiserver status = Running (err=<nil>)
	I1115 09:52:51.146218  282754 status.go:176] multinode-319967 status: &{Name:multinode-319967 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:52:51.146246  282754 status.go:174] checking status of multinode-319967-m02 ...
	I1115 09:52:51.146523  282754 cli_runner.go:164] Run: docker container inspect multinode-319967-m02 --format={{.State.Status}}
	I1115 09:52:51.164630  282754 status.go:371] multinode-319967-m02 host status = "Running" (err=<nil>)
	I1115 09:52:51.164656  282754 host.go:66] Checking if "multinode-319967-m02" exists ...
	I1115 09:52:51.164922  282754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-319967-m02
	I1115 09:52:51.182527  282754 host.go:66] Checking if "multinode-319967-m02" exists ...
	I1115 09:52:51.182809  282754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1115 09:52:51.182868  282754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-319967-m02
	I1115 09:52:51.201133  282754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21894-124770/.minikube/machines/multinode-319967-m02/id_rsa Username:docker}
	I1115 09:52:51.292531  282754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1115 09:52:51.305158  282754 status.go:176] multinode-319967-m02 status: &{Name:multinode-319967-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:52:51.305196  282754 status.go:174] checking status of multinode-319967-m03 ...
	I1115 09:52:51.305451  282754 cli_runner.go:164] Run: docker container inspect multinode-319967-m03 --format={{.State.Status}}
	I1115 09:52:51.324265  282754 status.go:371] multinode-319967-m03 host status = "Stopped" (err=<nil>)
	I1115 09:52:51.324290  282754 status.go:384] host is not running, skipping remaining checks
	I1115 09:52:51.324298  282754 status.go:176] multinode-319967-m03 status: &{Name:multinode-319967-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (6.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-319967 node start m03 -v=5 --alsologtostderr: (6.202272919s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.90s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (71.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-319967
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-319967
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-319967: (25.002618651s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-319967 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-319967 --wait=true -v=5 --alsologtostderr: (46.281589293s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-319967
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.41s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 node delete m03
E1115 09:54:13.452372  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 09:54:13.714093  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-319967 node delete m03: (4.619307913s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-319967 stop: (23.804750143s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-319967 status: exit status 7 (104.025421ms)

                                                
                                                
-- stdout --
	multinode-319967
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-319967-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr: exit status 7 (98.69371ms)

                                                
                                                
-- stdout --
	multinode-319967
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-319967-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 09:54:38.833640  292587 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:54:38.833891  292587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:54:38.833900  292587 out.go:374] Setting ErrFile to fd 2...
	I1115 09:54:38.833904  292587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:54:38.834181  292587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:54:38.834373  292587 out.go:368] Setting JSON to false
	I1115 09:54:38.834411  292587 mustload.go:66] Loading cluster: multinode-319967
	I1115 09:54:38.834526  292587 notify.go:221] Checking for updates...
	I1115 09:54:38.834813  292587 config.go:182] Loaded profile config "multinode-319967": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:54:38.834832  292587 status.go:174] checking status of multinode-319967 ...
	I1115 09:54:38.835308  292587 cli_runner.go:164] Run: docker container inspect multinode-319967 --format={{.State.Status}}
	I1115 09:54:38.853936  292587 status.go:371] multinode-319967 host status = "Stopped" (err=<nil>)
	I1115 09:54:38.854003  292587 status.go:384] host is not running, skipping remaining checks
	I1115 09:54:38.854017  292587 status.go:176] multinode-319967 status: &{Name:multinode-319967 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1115 09:54:38.854080  292587 status.go:174] checking status of multinode-319967-m02 ...
	I1115 09:54:38.854383  292587 cli_runner.go:164] Run: docker container inspect multinode-319967-m02 --format={{.State.Status}}
	I1115 09:54:38.872180  292587 status.go:371] multinode-319967-m02 host status = "Stopped" (err=<nil>)
	I1115 09:54:38.872209  292587 status.go:384] host is not running, skipping remaining checks
	I1115 09:54:38.872217  292587 status.go:176] multinode-319967-m02 status: &{Name:multinode-319967-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-319967 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-319967 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.847892308s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-319967 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.45s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-319967
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-319967-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-319967-m02 --driver=docker  --container-runtime=containerd: exit status 14 (79.237004ms)

                                                
                                                
-- stdout --
	* [multinode-319967-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-319967-m02' is duplicated with machine name 'multinode-319967-m02' in profile 'multinode-319967'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-319967-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-319967-m03 --driver=docker  --container-runtime=containerd: (21.396411255s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-319967
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-319967: exit status 80 (295.914666ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-319967 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-319967-m03 already exists in multinode-319967-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-319967-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-319967-m03: (1.950305384s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.78s)

                                                
                                    
x
+
TestPreload (104.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-244965 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-244965 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (42.175813534s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-244965 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-244965 image pull gcr.io/k8s-minikube/busybox: (1.848213077s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-244965
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-244965: (5.746098013s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-244965 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-244965 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.794950141s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-244965 image list
helpers_test.go:175: Cleaning up "test-preload-244965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-244965
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-244965: (2.486778116s)
--- PASS: TestPreload (104.28s)

                                                
                                    
x
+
TestScheduledStopUnix (97.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-874152 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-874152 --memory=3072 --driver=docker  --container-runtime=containerd: (21.727904005s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874152 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:58:02.395909  311088 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:02.396023  311088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:02.396028  311088 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:02.396032  311088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:02.396246  311088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:58:02.396467  311088 out.go:368] Setting JSON to false
	I1115 09:58:02.396567  311088 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:02.396893  311088 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:58:02.396958  311088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/config.json ...
	I1115 09:58:02.397178  311088 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:02.397297  311088 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-874152 -n scheduled-stop-874152
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874152 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:58:02.792537  311239 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:02.792659  311239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:02.792671  311239 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:02.792677  311239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:02.792869  311239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:58:02.793170  311239 out.go:368] Setting JSON to false
	I1115 09:58:02.793398  311239 daemonize_unix.go:73] killing process 311123 as it is an old scheduled stop
	I1115 09:58:02.793519  311239 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:02.793917  311239 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:58:02.794002  311239 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/config.json ...
	I1115 09:58:02.794240  311239 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:02.794381  311239 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1115 09:58:02.798687  128258 retry.go:31] will retry after 75.754µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.799855  128258 retry.go:31] will retry after 176.591µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.801042  128258 retry.go:31] will retry after 233.7µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.802229  128258 retry.go:31] will retry after 264.817µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.803399  128258 retry.go:31] will retry after 552.909µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.804553  128258 retry.go:31] will retry after 852.457µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.805686  128258 retry.go:31] will retry after 697.824µs: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.806830  128258 retry.go:31] will retry after 1.752289ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.809026  128258 retry.go:31] will retry after 3.769949ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.813208  128258 retry.go:31] will retry after 4.020048ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.817355  128258 retry.go:31] will retry after 7.754781ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.825642  128258 retry.go:31] will retry after 10.228749ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.836892  128258 retry.go:31] will retry after 15.253579ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.853145  128258 retry.go:31] will retry after 21.733856ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
I1115 09:58:02.875445  128258 retry.go:31] will retry after 43.759212ms: open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874152 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874152 -n scheduled-stop-874152
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-874152
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-874152 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1115 09:58:28.682977  312139 out.go:360] Setting OutFile to fd 1 ...
	I1115 09:58:28.683266  312139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:28.683275  312139 out.go:374] Setting ErrFile to fd 2...
	I1115 09:58:28.683280  312139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 09:58:28.683477  312139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 09:58:28.683706  312139 out.go:368] Setting JSON to false
	I1115 09:58:28.683789  312139 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:28.684149  312139 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 09:58:28.684250  312139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/scheduled-stop-874152/config.json ...
	I1115 09:58:28.684443  312139 mustload.go:66] Loading cluster: scheduled-stop-874152
	I1115 09:58:28.684534  312139 config.go:182] Loaded profile config "scheduled-stop-874152": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1115 09:59:13.460043  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-874152
E1115 09:59:13.714175  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-874152: exit status 7 (81.95251ms)

                                                
                                                
-- stdout --
	scheduled-stop-874152
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874152 -n scheduled-stop-874152
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-874152 -n scheduled-stop-874152: exit status 7 (81.435956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-874152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-874152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-874152: (4.514046585s)
--- PASS: TestScheduledStopUnix (97.76s)

                                                
                                    
x
+
TestInsufficientStorage (12.2s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-674746 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-674746 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.711603189s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"35919057-ff25-4dad-af2e-23be5a0d750a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-674746] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9737f870-bdd4-4832-baed-ceac4a6e39ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21894"}}
	{"specversion":"1.0","id":"5e06a4c4-c932-47fd-9730-8fc08441c2ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f50f7b9-0cef-4494-b5ab-9a69e7180c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig"}}
	{"specversion":"1.0","id":"1ac7dc4d-f17a-4d3f-8b48-c0fba2280b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube"}}
	{"specversion":"1.0","id":"ef43196e-e35e-4a76-8711-e37b616e46a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f6a82297-3eb4-432e-aec6-c5b70c9a317b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9b42bf9-0c62-4ba3-85b1-493c1a5206db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0d8bb8bb-34a8-41f8-82d9-2a251c6388fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"60529c48-e0e0-44f9-a53a-8384cd001852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec0073df-dc02-4797-8edc-2f3a0bb980d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f76674f8-b6b0-438f-a6a5-87fff1a59965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-674746\" primary control-plane node in \"insufficient-storage-674746\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"94472bbf-a72f-43e1-98d7-1fbc79053073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f337d61b-157e-42c6-9d9b-91dde8b0065e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0c8ef11-fe61-41dd-a976-30b53829751d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-674746 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-674746 --output=json --layout=cluster: exit status 7 (293.661762ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-674746","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-674746","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 09:59:28.357406  314443 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-674746" does not appear in /home/jenkins/minikube-integration/21894-124770/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-674746 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-674746 --output=json --layout=cluster: exit status 7 (287.875485ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-674746","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-674746","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1115 09:59:28.646037  314555 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-674746" does not appear in /home/jenkins/minikube-integration/21894-124770/kubeconfig
	E1115 09:59:28.656538  314555 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/insufficient-storage-674746/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-674746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-674746
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-674746: (1.908316713s)
--- PASS: TestInsufficientStorage (12.20s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (45.62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2351324217 start -p running-upgrade-645638 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1115 10:02:16.782792  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/addons-868580/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2351324217 start -p running-upgrade-645638 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (22.216759925s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-645638 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-645638 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.991198345s)
helpers_test.go:175: Cleaning up "running-upgrade-645638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-645638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-645638: (2.049170575s)
--- PASS: TestRunningBinaryUpgrade (45.62s)

                                                
                                    
x
+
TestKubernetesUpgrade (324.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1115 10:00:36.520720  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/functional-643455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.104849282s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-258521
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-258521: (1.381012196s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-258521 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-258521 status --format={{.Host}}: exit status 7 (104.439412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.778474352s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-258521 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (102.209135ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-258521] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-258521
	    minikube start -p kubernetes-upgrade-258521 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2585212 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-258521 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-258521 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (12.623456171s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-258521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-258521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-258521: (2.687798195s)
--- PASS: TestKubernetesUpgrade (324.85s)

                                                
                                    
x
+
TestMissingContainerUpgrade (103.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2326668680 start -p missing-upgrade-368254 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2326668680 start -p missing-upgrade-368254 --memory=3072 --driver=docker  --container-runtime=containerd: (48.009255863s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-368254
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-368254
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-368254 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-368254 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (51.912765193s)
helpers_test.go:175: Cleaning up "missing-upgrade-368254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-368254
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-368254: (2.010562667s)
--- PASS: TestMissingContainerUpgrade (103.01s)

                                                
                                    
x
+
TestPause/serial/Start (52.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-338148 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-338148 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.62459777s)
--- PASS: TestPause/serial/Start (52.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (102.342129ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-386485] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (30.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-386485 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-386485 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.036766125s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-386485 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (16.050535708s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-386485 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-386485 status -o json: exit status 2 (328.901842ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-386485","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-386485
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-386485: (2.097878856s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.48s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (99.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3029958974 start -p stopped-upgrade-338294 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3029958974 start -p stopped-upgrade-338294 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m3.905290127s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3029958974 -p stopped-upgrade-338294 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3029958974 -p stopped-upgrade-338294 stop: (1.274512477s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-338294 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-338294 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.481716886s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-386485 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.325617017s)
--- PASS: TestNoKubernetes/serial/Start (8.33s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-338148 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-338148 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.272793905s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21894-124770/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-386485 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-386485 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.07754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-386485
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-386485: (2.166642024s)
--- PASS: TestNoKubernetes/serial/Stop (2.17s)

                                                
                                    
x
+
TestPause/serial/Pause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-338148 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-338148 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-338148 --output=json --layout=cluster: exit status 2 (361.572634ms)

                                                
                                                
-- stdout --
	{"Name":"pause-338148","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-338148","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-338148 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-386485 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-386485 --driver=docker  --container-runtime=containerd: (6.64583201s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-338148 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.4s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-338148 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-338148 --alsologtostderr -v=5: (3.399695312s)
--- PASS: TestPause/serial/DeletePaused (3.40s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-338148
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-338148: exit status 1 (21.600738ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-338148: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-386485 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-386485 "sudo systemctl is-active --quiet service kubelet": exit status 1 (337.688231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-338294
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-338294: (1.224250958s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-035027 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-035027 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (171.422503ms)

                                                
                                                
-- stdout --
	* [false-035027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21894
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1115 10:02:10.245455  357472 out.go:360] Setting OutFile to fd 1 ...
	I1115 10:02:10.245776  357472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:10.245787  357472 out.go:374] Setting ErrFile to fd 2...
	I1115 10:02:10.245793  357472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1115 10:02:10.246032  357472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21894-124770/.minikube/bin
	I1115 10:02:10.246576  357472 out.go:368] Setting JSON to false
	I1115 10:02:10.248030  357472 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17080,"bootTime":1763183850,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1115 10:02:10.248150  357472 start.go:143] virtualization: kvm guest
	I1115 10:02:10.250502  357472 out.go:179] * [false-035027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1115 10:02:10.251839  357472 out.go:179]   - MINIKUBE_LOCATION=21894
	I1115 10:02:10.251847  357472 notify.go:221] Checking for updates...
	I1115 10:02:10.254142  357472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1115 10:02:10.255441  357472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21894-124770/kubeconfig
	I1115 10:02:10.256663  357472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21894-124770/.minikube
	I1115 10:02:10.257997  357472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1115 10:02:10.259827  357472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1115 10:02:10.261669  357472 config.go:182] Loaded profile config "cert-expiration-504206": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 10:02:10.261799  357472 config.go:182] Loaded profile config "force-systemd-flag-826563": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 10:02:10.261896  357472 config.go:182] Loaded profile config "kubernetes-upgrade-258521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1115 10:02:10.262018  357472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1115 10:02:10.287855  357472 docker.go:124] docker version: linux-29.0.1:Docker Engine - Community
	I1115 10:02:10.288019  357472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1115 10:02:10.349563  357472 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-15 10:02:10.338650037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:29.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1115 10:02:10.349708  357472 docker.go:319] overlay module found
	I1115 10:02:10.351475  357472 out.go:179] * Using the docker driver based on user configuration
	I1115 10:02:10.352599  357472 start.go:309] selected driver: docker
	I1115 10:02:10.352619  357472 start.go:930] validating driver "docker" against <nil>
	I1115 10:02:10.352631  357472 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1115 10:02:10.354475  357472 out.go:203] 
	W1115 10:02:10.355705  357472 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1115 10:02:10.356971  357472 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-035027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:09 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: cert-expiration-504206
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:02:10 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: force-systemd-flag-826563
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:20 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.94.2:8443
name: kubernetes-upgrade-258521
contexts:
- context:
cluster: cert-expiration-504206
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:09 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: cert-expiration-504206
name: cert-expiration-504206
- context:
cluster: force-systemd-flag-826563
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:02:10 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: force-systemd-flag-826563
name: force-systemd-flag-826563
- context:
cluster: kubernetes-upgrade-258521
user: kubernetes-upgrade-258521
name: kubernetes-upgrade-258521
current-context: force-systemd-flag-826563
kind: Config
users:
- name: cert-expiration-504206
user:
client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.crt
client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.key
- name: force-systemd-flag-826563
user:
client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/force-systemd-flag-826563/client.crt
client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/force-systemd-flag-826563/client.key
- name: kubernetes-upgrade-258521
user:
client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.crt
client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-035027

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-035027"

                                                
                                                
----------------------- debugLogs end: false-035027 [took: 3.521283719s] --------------------------------
helpers_test.go:175: Cleaning up "false-035027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-035027
--- PASS: TestNetworkPlugins/group/false (4.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (42.092723018s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (42.929164842s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-035027 "pgrep -a kubelet"
I1115 10:03:27.214572  128258 config.go:182] Loaded profile config "auto-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gh94z" [19f5b97d-291a-48fc-be73-01151b4213cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gh94z" [19f5b97d-291a-48fc-be73-01151b4213cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003355775s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-x4jp6" [6b676c52-db31-4129-97af-c659cbb25762] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003915195s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-035027 "pgrep -a kubelet"
I1115 10:03:49.305983  128258 config.go:182] Loaded profile config "kindnet-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tp8kt" [c56f06c1-7c76-40af-845b-275fd7c6d135] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tp8kt" [c56f06c1-7c76-40af-845b-275fd7c6d135] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004525843s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (48.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (48.451389242s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (55.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.952580817s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (43.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (43.51040925s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xjrn5" [36ca94a4-5df1-446f-9d2c-2b9a6cbee1d5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xjrn5" [36ca94a4-5df1-446f-9d2c-2b9a6cbee1d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003956026s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-035027 "pgrep -a kubelet"
I1115 10:04:50.832871  128258 config.go:182] Loaded profile config "calico-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t8h57" [d49f8f9d-1f00-4994-b57f-25f7c3da1cf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t8h57" [d49f8f9d-1f00-4994-b57f-25f7c3da1cf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004075488s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-035027 "pgrep -a kubelet"
I1115 10:05:02.973338  128258 config.go:182] Loaded profile config "enable-default-cni-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rwkqc" [17635063-e719-4c31-a2c6-70d5b265aea5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rwkqc" [17635063-e719-4c31-a2c6-70d5b265aea5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003629478s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-035027 "pgrep -a kubelet"
I1115 10:05:14.595907  128258 config.go:182] Loaded profile config "custom-flannel-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6f6lj" [959d3b19-140f-41fe-b041-08641e4aa083] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6f6lj" [959d3b19-140f-41fe-b041-08641e4aa083] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003464361s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.063081658s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (76.42s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-035027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m16.422692502s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.42s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-732320 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-732320 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.090993788s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.09s)

TestStartStop/group/no-preload/serial/FirstStart (48.41s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-781837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-781837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.405396706s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.41s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-42hwj" [563a7d6f-2030-4cd6-9353-4973ea717950] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004748068s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-035027 "pgrep -a kubelet"
I1115 10:06:23.215717  128258 config.go:182] Loaded profile config "flannel-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jlhrr" [8a605cb2-08f6-4359-bac4-a5717c346e58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jlhrr" [8a605cb2-08f6-4359-bac4-a5717c346e58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003517078s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-732320 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c21b2c3a-5af1-4519-93d4-f69c5158b264] Pending
helpers_test.go:352: "busybox" [c21b2c3a-5af1-4519-93d4-f69c5158b264] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c21b2c3a-5af1-4519-93d4-f69c5158b264] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004728498s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-732320 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)
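DeployApp is a three-step sequence: create the busybox pod, wait for it by the integration-test=busybox label, then exec a trivial command (ulimit -n prints the container's open-file limit, and a successful exec proves the kubelet exec path works). A condensed sketch, again substituting kubectl wait for the harness's watcher:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ctx := "old-k8s-version-732320"
        run := func(args ...string) ([]byte, error) {
            full := append([]string{"--context", ctx}, args...)
            return exec.Command("kubectl", full...).CombinedOutput()
        }
        if out, err := run("create", "-f", "testdata/busybox.yaml"); err != nil {
            fmt.Printf("create failed: %v\n%s", err, out)
            return
        }
        if out, err := run("wait", "--for=condition=Ready", "pod",
            "-l", "integration-test=busybox", "--timeout=8m"); err != nil {
            fmt.Printf("wait failed: %v\n%s", err, out)
            return
        }
        // A trivial exec confirms the pod accepts commands and reads the
        // container's open-file limit, as the test above does.
        out, _ := run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
        fmt.Printf("open-file limit in pod: %s", out)
    }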

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-035027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-732320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
I1115 10:06:49.104771  128258 config.go:182] Loaded profile config "bridge-035027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-732320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004941407s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-732320 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestNetworkPlugins/group/bridge/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-035027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pxwwx" [e22404d3-0143-4e2d-b5c0-bace8920f281] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pxwwx" [e22404d3-0143-4e2d-b5c0-bace8920f281] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003877468s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-781837 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f87c721-329b-4928-ae81-b06c2f51cf6f] Pending
helpers_test.go:352: "busybox" [8f87c721-329b-4928-ae81-b06c2f51cf6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f87c721-329b-4928-ae81-b06c2f51cf6f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003719512s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-781837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/Stop (12.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-732320 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-732320 --alsologtostderr -v=3: (12.18014173s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

TestStartStop/group/embed-certs/serial/FirstStart (39.93s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-869219 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-869219 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.931628579s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.93s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-035027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-035027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1115 10:08:27.431951  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.439704  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.451583  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.473049  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.514538  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.596549  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:27.758129  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:28.079855  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-781837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-781837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (12.11s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-781837 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-781837 --alsologtostderr -v=3: (12.110341224s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732320 -n old-k8s-version-732320
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732320 -n old-k8s-version-732320: exit status 7 (91.644919ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-732320 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
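Note the exit-code handling in this step: after a stop, minikube status still prints the host state ("Stopped") on stdout but exits non-zero (7 in this run), and the test explicitly tolerates that ("may be ok"). A sketch of a caller that reads both the output and the exit code instead of failing on any non-zero status:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("minikube", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-732320")
        out, err := cmd.Output()
        if exit, ok := err.(*exec.ExitError); ok {
            // A stopped profile is signalled via the exit code (7 in this
            // run) while stdout still carries the host state.
            fmt.Printf("host=%s exit=%d (may be ok)\n",
                strings.TrimSpace(string(out)), exit.ExitCode())
            return
        }
        if err != nil {
            fmt.Println("could not run minikube:", err)
            return
        }
        fmt.Printf("host=%s\n", strings.TrimSpace(string(out)))
    }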

TestStartStop/group/old-k8s-version/serial/SecondStart (49.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-732320 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-732320 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.05367418s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732320 -n old-k8s-version-732320
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.39s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781837 -n no-preload-781837
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781837 -n no-preload-781837: exit status 7 (94.669387ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-781837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (47.77s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-781837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-781837 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.419056478s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781837 -n no-preload-781837
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.77s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-343353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-343353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (45.376925154s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.38s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-869219 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2c6b950d-9a1e-4700-91c9-78771136f900] Pending
helpers_test.go:352: "busybox" [2c6b950d-9a1e-4700-91c9-78771136f900] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2c6b950d-9a1e-4700-91c9-78771136f900] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005236605s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-869219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-869219 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-869219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-869219 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-869219 --alsologtostderr -v=3: (12.131441822s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9nq92" [ee477d9e-e143-4722-a24c-3c8ececabebf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003405895s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-869219 -n embed-certs-869219
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-869219 -n embed-certs-869219: exit status 7 (83.384914ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-869219 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (44.72s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-869219 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-869219 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.314581888s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-869219 -n embed-certs-869219
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.72s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9nq92" [ee477d9e-e143-4722-a24c-3c8ececabebf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00441644s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-732320 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5d4gm" [666d8f5e-7bfc-4b13-b76a-211c157752d5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004432175s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-732320 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
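VerifyKubernetesImages lists the profile's images as JSON and reports repositories outside the expected minikube/registry.k8s.io set, which is why the kindest/kindnetd and busybox entries are called out above. A sketch that dumps the same JSON for inspection without assuming its schema:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        raw, err := exec.Command("minikube", "-p", "old-k8s-version-732320",
            "image", "list", "--format=json").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        // Pretty-print the raw JSON; the harness scans this same output
        // for images outside its expected registries.
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, raw, "", "  "); err != nil {
            fmt.Printf("%s", raw) // not valid JSON? print as-is
            return
        }
        fmt.Println(pretty.String())
    }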

TestStartStop/group/old-k8s-version/serial/Pause (3.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-732320 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732320 -n old-k8s-version-732320
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732320 -n old-k8s-version-732320: exit status 2 (496.340792ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-732320 -n old-k8s-version-732320
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-732320 -n old-k8s-version-732320: exit status 2 (401.04607ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-732320 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732320 -n old-k8s-version-732320
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-732320 -n old-k8s-version-732320
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.29s)
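The pause sequence leans on the same tolerant status handling: while paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} prints "Stopped", each with exit status 2, and the test proceeds to unpause rather than treating the non-zero exit as failure. A sketch that surfaces both fields together with their exit codes, assuming the same profile:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // A caller that treats any non-zero exit as an error would
        // misread a deliberately paused cluster as broken.
        for _, field := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
            cmd := exec.Command("minikube", "status",
                "--format="+field, "-p", "old-k8s-version-732320")
            out, err := cmd.Output()
            code := 0
            if exit, ok := err.(*exec.ExitError); ok {
                code = exit.ExitCode()
            }
            fmt.Printf("%s -> %s (exit %d, may be ok)\n",
                field, strings.TrimSpace(string(out)), code)
        }
    }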

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-343353 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c1061c7-0efb-4ec0-9a93-2908a760a891] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c1061c7-0efb-4ec0-9a93-2908a760a891] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004972311s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-343353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5d4gm" [666d8f5e-7bfc-4b13-b76a-211c157752d5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003988178s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-781837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/newest-cni/serial/FirstStart (28.93s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-091688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-091688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (28.931892677s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.93s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-781837 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.57s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-781837 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781837 -n no-preload-781837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781837 -n no-preload-781837: exit status 2 (382.806766ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781837 -n no-preload-781837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781837 -n no-preload-781837: exit status 2 (443.770784ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-781837 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781837 -n no-preload-781837
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781837 -n no-preload-781837
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-343353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-343353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07276425s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-343353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-343353 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-343353 --alsologtostderr -v=3: (14.412685437s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
E1115 10:08:28.721563  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353: exit status 7 (100.933792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-343353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-343353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1115 10:08:30.003537  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:32.564882  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:37.686610  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-343353 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (49.479356043s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-091688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-091688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005399945s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz5rx" [42954a36-ff93-484c-9b49-0925fd636da8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004552743s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/Stop (1.43s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-091688 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-091688 --alsologtostderr -v=3: (1.427597404s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-091688 -n newest-cni-091688
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-091688 -n newest-cni-091688: exit status 7 (84.228971ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-091688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (11.15s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-091688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1115 10:08:43.003174  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.009803  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.021185  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.042716  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.084171  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.165661  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.327401  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:43.649335  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:44.290957  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:45.572430  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-091688 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.796776101s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-091688 -n newest-cni-091688
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.15s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vz5rx" [42954a36-ff93-484c-9b49-0925fd636da8] Running
E1115 10:08:47.928895  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/auto-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1115 10:08:48.134553  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003435552s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-869219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-869219 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.21s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-869219 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-869219 -n embed-certs-869219
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-869219 -n embed-certs-869219: exit status 2 (363.675052ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-869219 -n embed-certs-869219
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-869219 -n embed-certs-869219: exit status 2 (344.867988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-869219 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-869219 -n embed-certs-869219
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-869219 -n embed-certs-869219
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-091688 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-091688 --alsologtostderr -v=1
E1115 10:08:53.256844  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-091688 -n newest-cni-091688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-091688 -n newest-cni-091688: exit status 2 (371.455536ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-091688 -n newest-cni-091688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-091688 -n newest-cni-091688: exit status 2 (337.308182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-091688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-091688 -n newest-cni-091688
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-091688 -n newest-cni-091688
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9ckqf" [28c94163-1da5-4d24-a288-03a614a2cbe2] Running
E1115 10:09:23.980046  128258 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kindnet-035027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003307312s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9ckqf" [28c94163-1da5-4d24-a288-03a614a2cbe2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003598838s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-343353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-343353 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-343353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353: exit status 2 (311.472829ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353: exit status 2 (313.310587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-343353 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-343353 -n default-k8s-diff-port-343353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.72s)

                                                
                                    

Test skip (26/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-035027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:09 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: cert-expiration-504206
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:20 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.94.2:8443
name: kubernetes-upgrade-258521
contexts:
- context:
cluster: cert-expiration-504206
extensions:
- extension:
last-update: Sat, 15 Nov 2025 10:01:09 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: cert-expiration-504206
name: cert-expiration-504206
- context:
cluster: kubernetes-upgrade-258521
user: kubernetes-upgrade-258521
name: kubernetes-upgrade-258521
current-context: ""
kind: Config
users:
- name: cert-expiration-504206
user:
client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.crt
client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.key
- name: kubernetes-upgrade-258521
user:
client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.crt
client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-035027

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-035027"

                                                
                                                
----------------------- debugLogs end: kubenet-035027 [took: 3.594569107s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-035027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-035027
--- SKIP: TestNetworkPlugins/group/kubenet (3.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-035027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-035027

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-035027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: kubelet daemon config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> k8s: kubelet logs:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:01:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-504206
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21894-124770/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:01:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-258521
contexts:
- context:
    cluster: cert-expiration-504206
    extensions:
    - extension:
        last-update: Sat, 15 Nov 2025 10:01:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-504206
  name: cert-expiration-504206
- context:
    cluster: kubernetes-upgrade-258521
    user: kubernetes-upgrade-258521
  name: kubernetes-upgrade-258521
current-context: ""
kind: Config
users:
- name: cert-expiration-504206
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.crt
    client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/cert-expiration-504206/client.key
- name: kubernetes-upgrade-258521
  user:
    client-certificate: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.crt
    client-key: /home/jenkins/minikube-integration/21894-124770/.minikube/profiles/kubernetes-upgrade-258521/client.key
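
Aside (not part of the captured log): the repeated "context was not found" errors above are consistent with this kubeconfig, which lists only the cert-expiration-504206 and kubernetes-upgrade-258521 contexts; the cilium-035027 profile had already been deleted by the time the debug collector ran. A minimal Go sketch of the lookup kubectl performs, using client-go's clientcmd package (illustrative, not the collector's actual code):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the merged kubeconfig the same way kubectl does
        // (KUBECONFIG env var if set, otherwise ~/.kube/config).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }

        // "cilium-035027" is the context the debug collector asked for.
        const want = "cilium-035027"
        if _, ok := cfg.Contexts[want]; !ok {
            // Mirrors the failure mode in the log: the profile was
            // deleted, so its context is absent from the kubeconfig.
            fmt.Printf("context %q does not exist; %d contexts present\n", want, len(cfg.Contexts))
        }
    }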

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-035027

>>> host: docker daemon status:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: docker daemon config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: docker system info:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: cri-docker daemon status:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: cri-docker daemon config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: cri-dockerd version:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: containerd daemon status:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: containerd daemon config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: containerd config dump:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: crio daemon status:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: crio daemon config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: /etc/crio:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

>>> host: crio config:
* Profile "cilium-035027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035027"

----------------------- debugLogs end: cilium-035027 [took: 4.015779373s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-035027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-035027
--- SKIP: TestNetworkPlugins/group/cilium (4.38s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-354419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-354419
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
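
Aside (not part of the captured log): the skip above is a driver gate inside the test itself. A minimal sketch of that pattern, using a hypothetical helper name (minikube's real check lives in start_stop_delete_test.go):

    package example

    import "testing"

    // requireVirtualBox skips the calling test unless the configured
    // driver is virtualbox (illustrative helper, not minikube's code).
    func requireVirtualBox(t *testing.T, driver string) {
        t.Helper()
        if driver != "virtualbox" {
            t.Skipf("skipping %s - only runs on virtualbox", t.Name())
        }
    }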
