Test Report: Docker_Linux 22021

714686ca7bbd77e34d847e892f53d4af2ede556f:2025-12-02:42609

Failed tests (2/435)

Order  Failed test                                                                   Duration (s)
195    TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd    302.07
208    TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL           602.56
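The DashboardCmd failure below is a timeout: the test launches `minikube dashboard --url`, which polls the kubectl-proxy dashboard endpoint until it answers 200, but the endpoint only ever returns HTTP 503, so no URL is ever printed. For reference, a minimal Go sketch of that kind of poll-with-backoff health check (a hypothetical standalone helper, not minikube's actual dashboard.go logic; the proxy URL is copied from the log below):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHTTP200 polls url until it returns 200 OK or the deadline passes.
// Illustrative only; minikube's own retry code differs in detail.
func waitForHTTP200(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 100 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", url)
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // exponential backoff, capped at 5s
		}
	}
}

func main() {
	// Proxy URL taken from the failing test's stderr below.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitForHTTP200(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}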
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-169724 --alsologtostderr -v=1] stderr:
I1202 15:21:56.913625  643687 out.go:360] Setting OutFile to fd 1 ...
I1202 15:21:56.913759  643687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:56.913770  643687 out.go:374] Setting ErrFile to fd 2...
I1202 15:21:56.913778  643687 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:21:56.914127  643687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:21:56.914519  643687 mustload.go:66] Loading cluster: functional-169724
I1202 15:21:56.914952  643687 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:21:56.915387  643687 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:21:56.940802  643687 host.go:66] Checking if "functional-169724" exists ...
I1202 15:21:56.941274  643687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 15:21:57.013817  643687 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:57.001790665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1202 15:21:57.013989  643687 api_server.go:166] Checking apiserver status ...
I1202 15:21:57.014043  643687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 15:21:57.014096  643687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:21:57.034065  643687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:21:57.144051  643687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/10082/cgroup
W1202 15:21:57.154764  643687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/10082/cgroup: Process exited with status 1
stdout:

stderr:
I1202 15:21:57.154845  643687 ssh_runner.go:195] Run: ls
I1202 15:21:57.159174  643687 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1202 15:21:57.164476  643687 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1202 15:21:57.164535  643687 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1202 15:21:57.164705  643687 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:21:57.164723  643687 addons.go:70] Setting dashboard=true in profile "functional-169724"
I1202 15:21:57.164730  643687 addons.go:239] Setting addon dashboard=true in "functional-169724"
I1202 15:21:57.164755  643687 host.go:66] Checking if "functional-169724" exists ...
I1202 15:21:57.165082  643687 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:21:57.238399  643687 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1202 15:21:57.280084  643687 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1202 15:21:57.299407  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1202 15:21:57.299462  643687 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1202 15:21:57.299550  643687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:21:57.322639  643687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:21:57.437880  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1202 15:21:57.437909  643687 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1202 15:21:57.451798  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1202 15:21:57.451828  643687 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1202 15:21:57.466994  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1202 15:21:57.467028  643687 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1202 15:21:57.481466  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1202 15:21:57.481490  643687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1202 15:21:57.495648  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1202 15:21:57.495677  643687 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1202 15:21:57.509104  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1202 15:21:57.509135  643687 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1202 15:21:57.522545  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1202 15:21:57.522569  643687 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1202 15:21:57.536043  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1202 15:21:57.536066  643687 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1202 15:21:57.549292  643687 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:21:57.549324  643687 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1202 15:21:57.563076  643687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1202 15:21:58.080172  643687 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-169724 addons enable metrics-server

I1202 15:21:58.081541  643687 addons.go:202] Writing out "functional-169724" config to set dashboard=true...
W1202 15:21:58.081806  643687 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1202 15:21:58.082510  643687 kapi.go:59] client config for functional-169724: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt", KeyFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.key", CAFile:"/home/jenkins/minikube-integration/22021-563346/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1202 15:21:58.082995  643687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1202 15:21:58.083012  643687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1202 15:21:58.083017  643687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1202 15:21:58.083021  643687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1202 15:21:58.083025  643687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1202 15:21:58.092149  643687 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  64c9067d-1c50-47a9-bfb2-e9297465827a 868 0 2025-12-02 15:21:58 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-02 15:21:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.104.110.134,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.104.110.134],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1202 15:21:58.092348  643687 out.go:285] * Launching proxy ...
* Launching proxy ...
I1202 15:21:58.092410  643687 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-169724 proxy --port 36195]
I1202 15:21:58.092705  643687 dashboard.go:159] Waiting for kubectl to output host:port ...
I1202 15:21:58.146720  643687 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1202 15:21:58.146793  643687 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1202 15:21:58.156069  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43088be5-e7d4-445a-ab08-daed4da90f4d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081ef40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005eef00 TLS:<nil>}
I1202 15:21:58.156156  643687 retry.go:31] will retry after 142.839µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.159945  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84563250-8083-41fb-aec1-c99b89115388] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0780 TLS:<nil>}
I1202 15:21:58.160000  643687 retry.go:31] will retry after 194.404µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.163490  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3ad115c-1a23-4ca0-ad6b-ea47f9dc4c86] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef040 TLS:<nil>}
I1202 15:21:58.163540  643687 retry.go:31] will retry after 197.614µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.166832  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc2b120b-6cf3-4a19-9bc7-3c695639164d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c08c0 TLS:<nil>}
I1202 15:21:58.166892  643687 retry.go:31] will retry after 250.117µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.170742  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d2f624dd-251f-494e-8a2d-db5d7bd6614a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef180 TLS:<nil>}
I1202 15:21:58.170809  643687 retry.go:31] will retry after 611.23µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.174586  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[628b1589-9cc8-4621-9520-c92fef1f8d18] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349180 TLS:<nil>}
I1202 15:21:58.174684  643687 retry.go:31] will retry after 956.894µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.178319  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3072942f-d6c2-4a62-a421-472e200fdd06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090be40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0a00 TLS:<nil>}
I1202 15:21:58.178386  643687 retry.go:31] will retry after 688.898µs: Temporary Error: unexpected response code: 503
I1202 15:21:58.181916  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8faa2677-ac55-4eed-9540-6575d2623121] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef2c0 TLS:<nil>}
I1202 15:21:58.182003  643687 retry.go:31] will retry after 1.320838ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.186748  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec7c767c-a5fd-4d96-913b-fc5d1181e585] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00090bf40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003492c0 TLS:<nil>}
I1202 15:21:58.186808  643687 retry.go:31] will retry after 1.332749ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.191308  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1497bb58-85f3-49e4-ab37-8fddca921685] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef400 TLS:<nil>}
I1202 15:21:58.191399  643687 retry.go:31] will retry after 2.988593ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.197973  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cae6e7d8-b390-4faf-9edc-6f061ce849bd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0b40 TLS:<nil>}
I1202 15:21:58.198051  643687 retry.go:31] will retry after 8.489819ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.210456  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[24c85625-598d-413a-a936-249068a2e461] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083c940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0c80 TLS:<nil>}
I1202 15:21:58.210537  643687 retry.go:31] will retry after 6.771642ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.221053  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[487addb1-ed41-460f-a342-b74a4c71e825] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349400 TLS:<nil>}
I1202 15:21:58.221133  643687 retry.go:31] will retry after 16.452845ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.241309  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75d2b107-8078-4f51-80ca-85fdd2b78abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef540 TLS:<nil>}
I1202 15:21:58.241368  643687 retry.go:31] will retry after 19.353478ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.264443  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d55cb24-72bf-4209-8ea0-a3c023cb8698] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000888e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0dc0 TLS:<nil>}
I1202 15:21:58.264517  643687 retry.go:31] will retry after 19.498497ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.287373  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0348b572-6ceb-433a-bb24-62201fbb1cc3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef680 TLS:<nil>}
I1202 15:21:58.287441  643687 retry.go:31] will retry after 53.283125ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.344688  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e78d8ec2-757c-47c9-bc17-147d835359d3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0f00 TLS:<nil>}
I1202 15:21:58.344765  643687 retry.go:31] will retry after 84.174374ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.432365  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed220eb1-ace8-4269-949a-5020b5cb8a06] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349540 TLS:<nil>}
I1202 15:21:58.432440  643687 retry.go:31] will retry after 72.25084ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.508949  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c052ca4-aa94-4f18-9d92-8b14f2a1e060] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc000889000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349680 TLS:<nil>}
I1202 15:21:58.509023  643687 retry.go:31] will retry after 188.65018ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.701349  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc184844-b2ee-4cf2-a05c-803b8fc73dd9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00081f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ef7c0 TLS:<nil>}
I1202 15:21:58.701411  643687 retry.go:31] will retry after 172.035007ms: Temporary Error: unexpected response code: 503
I1202 15:21:58.877164  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef425056-52d5-4ea9-b4fb-c7a490a125c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:58 GMT]] Body:0xc00083cc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1040 TLS:<nil>}
I1202 15:21:58.877314  643687 retry.go:31] will retry after 389.554091ms: Temporary Error: unexpected response code: 503
I1202 15:21:59.270735  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a1b5076-ce3e-4820-bee7-0a6b00d869bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:59 GMT]] Body:0xc000889180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003497c0 TLS:<nil>}
I1202 15:21:59.270819  643687 retry.go:31] will retry after 671.461659ms: Temporary Error: unexpected response code: 503
I1202 15:21:59.945535  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[464969f7-6a46-4821-9749-3e630e306743] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:21:59 GMT]] Body:0xc00083cd40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efb80 TLS:<nil>}
I1202 15:21:59.945617  643687 retry.go:31] will retry after 860.771622ms: Temporary Error: unexpected response code: 503
I1202 15:22:00.810259  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88420ad2-b589-4a7c-885f-7b33475b2f34] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:00 GMT]] Body:0xc00081f640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349900 TLS:<nil>}
I1202 15:22:00.810339  643687 retry.go:31] will retry after 1.477272918s: Temporary Error: unexpected response code: 503
I1202 15:22:02.290778  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e32c49d8-2057-45ce-b0e2-bac92b27de5d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:02 GMT]] Body:0xc00083cec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349a40 TLS:<nil>}
I1202 15:22:02.290843  643687 retry.go:31] will retry after 1.15700599s: Temporary Error: unexpected response code: 503
I1202 15:22:03.451362  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70cadeb5-5337-40ce-8fe4-045e0b437b98] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:03 GMT]] Body:0xc0008892c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000349b80 TLS:<nil>}
I1202 15:22:03.451465  643687 retry.go:31] will retry after 2.292837912s: Temporary Error: unexpected response code: 503
I1202 15:22:05.747860  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5403f731-f5a4-4cfc-be4d-17847a552164] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:05 GMT]] Body:0xc00081f6c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efcc0 TLS:<nil>}
I1202 15:22:05.747923  643687 retry.go:31] will retry after 2.133191737s: Temporary Error: unexpected response code: 503
I1202 15:22:07.884356  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64bdadc9-32af-46e2-aa0d-ad39c921052b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:07 GMT]] Body:0xc00081f740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005efe00 TLS:<nil>}
I1202 15:22:07.884416  643687 retry.go:31] will retry after 6.650152813s: Temporary Error: unexpected response code: 503
I1202 15:22:14.539605  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4799373e-67dd-47c4-8e77-7226976ad8bd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:14 GMT]] Body:0xc000889500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc280 TLS:<nil>}
I1202 15:22:14.539690  643687 retry.go:31] will retry after 11.218773495s: Temporary Error: unexpected response code: 503
I1202 15:22:25.763777  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8e926fb-b957-4b82-af40-61a4769d6cc9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:25 GMT]] Body:0xc000889580 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c1180 TLS:<nil>}
I1202 15:22:25.763854  643687 retry.go:31] will retry after 6.934262968s: Temporary Error: unexpected response code: 503
I1202 15:22:32.701920  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f20e6a4d-6c08-4bc8-a916-bc73d2f458fc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:32 GMT]] Body:0xc00083cfc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc3c0 TLS:<nil>}
I1202 15:22:32.702018  643687 retry.go:31] will retry after 22.295270073s: Temporary Error: unexpected response code: 503
I1202 15:22:55.000984  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b3df7fc-3edd-4ea5-93fc-b90fec0e7c99] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:22:55 GMT]] Body:0xc00081f8c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc500 TLS:<nil>}
I1202 15:22:55.001068  643687 retry.go:31] will retry after 27.795931633s: Temporary Error: unexpected response code: 503
I1202 15:23:22.801191  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0678e711-6fc2-4600-9eb0-55d5d78a338a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:23:22 GMT]] Body:0xc00081f980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c12c0 TLS:<nil>}
I1202 15:23:22.801284  643687 retry.go:31] will retry after 1m1.074600245s: Temporary Error: unexpected response code: 503
I1202 15:24:23.879296  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[719a025b-56de-4ae3-8692-8f29bd145296] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:24:23 GMT]] Body:0xc00081e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc640 TLS:<nil>}
I1202 15:24:23.879390  643687 retry.go:31] will retry after 39.781013294s: Temporary Error: unexpected response code: 503
I1202 15:25:03.664423  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f436ab2f-d5a2-4673-a0a9-618604f48c79] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:25:03 GMT]] Body:0xc00078e0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0000 TLS:<nil>}
I1202 15:25:03.664499  643687 retry.go:31] will retry after 1m28.625534719s: Temporary Error: unexpected response code: 503
I1202 15:26:32.294277  643687 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[664a9203-f8e1-4f3b-a348-47d787d6c77f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Tue, 02 Dec 2025 15:26:32 GMT]] Body:0xc00081e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c0140 TLS:<nil>}
I1202 15:26:32.294369  643687 retry.go:31] will retry after 34.082644645s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-169724
helpers_test.go:243: (dbg) docker inspect functional-169724:

-- stdout --
	[
	    {
	        "Id": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
	        "Created": "2025-12-02T15:18:38.356956471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:18:38.392277615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hosts",
	        "LogPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e-json.log",
	        "Name": "/functional-169724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-169724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-169724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
	                "LowerDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62-init/diff:/var/lib/docker/overlay2/07ec335befb7b26acaacda7ed9253badae67627e1c23bce677fab65b2eb5425a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-169724",
	                "Source": "/var/lib/docker/volumes/functional-169724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-169724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-169724",
	                "name.minikube.sigs.k8s.io": "functional-169724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e2ce3823480529bac422fa191445f465825bae9fde6bbb6696f94ab8b9a30fe8",
	            "SandboxKey": "/var/run/docker/netns/e2ce38234805",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-169724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "300c2391fbf7a793261c8a44888e7fc898256c3ddc3ed8c9dc4987126019541c",
	                    "EndpointID": "7cb161253f17eaa87e372155b28d897ea3742d100fc2c51b11787e5e14e7c0fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:27:8d:3e:7f:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-169724",
	                        "6aca14b45406"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-169724 -n functional-169724
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 logs -n 25: (1.080085311s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-169724 image ls --format yaml --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format short --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ cp             │ functional-169724 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                  │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ ssh            │ functional-169724 ssh -n functional-169724 sudo cat /tmp/does/not/exist/cp-test.txt                                                                        │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image save kicbase/echo-server:functional-169724 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image rm kicbase/echo-server:functional-169724 --alsologtostderr                                                                         │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image save --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format yaml --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls --format short --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ ssh            │ functional-169724 ssh pgrep buildkitd                                                                                                                      │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls --format json --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format table --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr                                                     │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:21:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:21:47.314586  641068 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:21:47.314684  641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.314692  641068 out.go:374] Setting ErrFile to fd 2...
	I1202 15:21:47.314696  641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.314930  641068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:21:47.315423  641068 out.go:368] Setting JSON to false
	I1202 15:21:47.316714  641068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7456,"bootTime":1764681451,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:21:47.316780  641068 start.go:143] virtualization: kvm guest
	I1202 15:21:47.318463  641068 out.go:179] * [functional-169724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:21:47.319518  641068 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:21:47.319534  641068 notify.go:221] Checking for updates...
	I1202 15:21:47.322273  641068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:21:47.323401  641068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:21:47.324434  641068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:21:47.325717  641068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:21:47.327279  641068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:21:47.329086  641068 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1202 15:21:47.329971  641068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:21:47.357518  641068 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:21:47.357625  641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:47.421491  641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.409816405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:47.421613  641068 docker.go:319] overlay module found
	I1202 15:21:47.423070  641068 out.go:179] * Using the docker driver based on existing profile
	I1202 15:21:47.424115  641068 start.go:309] selected driver: docker
	I1202 15:21:47.424136  641068 start.go:927] validating driver "docker" against &{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:47.424267  641068 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:21:47.424387  641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:47.481154  641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.471959518 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:47.481877  641068 cni.go:84] Creating CNI manager for ""
	I1202 15:21:47.481949  641068 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1202 15:21:47.482010  641068 start.go:353] cluster config:
	{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:47.483575  641068 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 02 15:21:59 functional-169724 dockerd[7755]: time="2025-12-02T15:21:59.403585379Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:21:59 functional-169724 dockerd[7755]: time="2025-12-02T15:21:59.512715955Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:04 functional-169724 dockerd[7755]: time="2025-12-02T15:22:04.189605532Z" level=info msg="ignoring event" container=c5d7a555311c73af0dd6785b30aed59022734dad3efecbe1b29afbb99054221b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 02 15:22:04 functional-169724 dockerd[7755]: time="2025-12-02T15:22:04.357591630Z" level=info msg="ignoring event" container=6557df140b51fb2026fd7881cfc0f05e861ca1d97f10929fea6f284a83907449 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 02 15:22:05 functional-169724 dockerd[7755]: time="2025-12-02T15:22:05.467560101Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=b32f1c40ba5d ep=k8s_POD_sp-pod_default_d8340d35-b3d6-4326-b084-210d089189c8_0 net=none nid=41dbb18abfc2
	Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb749a039999dd2401a723e0c1e60d5a92994ce685ff47c06d4aef5217e81059/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Dec 02 15:22:12 functional-169724 dockerd[7755]: time="2025-12-02T15:22:12.297492774Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.199739095Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.231312464Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:17 functional-169724 dockerd[7755]: 2025/12/02 15:22:17 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 02 15:22:18 functional-169724 dockerd[7755]: time="2025-12-02T15:22:18.824698620Z" level=info msg="sbJoin: gwep4 ''->'4689289d919b', gwep6 ''->''"
	Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="error getting RW layer size for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8': Error response from daemon: No such container: 981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8"
	Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8'"
	Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.200382357Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.292599059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:38 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:38Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 02 15:22:39 functional-169724 dockerd[7755]: time="2025-12-02T15:22:39.272473055Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:27 functional-169724 dockerd[7755]: time="2025-12-02T15:23:27.281691411Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.198964490Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.235678685Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.200258483Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.295911943Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:24:55 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:24:55Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 02 15:24:57 functional-169724 dockerd[7755]: time="2025-12-02T15:24:57.274599396Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
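	
	Note on the entries above: the repeated "toomanyrequests" errors record unauthenticated pulls of docker.io/kubernetesui/dashboard being rejected by Docker Hub's rate limit, which matches the DashboardCmd failure ("output didn't produce a URL") since the dashboard image never arrived. A minimal workaround sketch, not part of this run and assuming a host that has run `docker login`, is to fetch the image elsewhere and sideload it through the same image save/load path the audit log above already exercises; the v2.7.0 tag is taken from the cri-dockerd lines above.
	
	  # on a host authenticated to Docker Hub (docker login already done)
	  docker pull kubernetesui/dashboard:v2.7.0
	  docker save kubernetesui/dashboard:v2.7.0 -o dashboard-v2.7.0.tar
	  # load the tarball straight into the profile's runtime, bypassing docker.io
	  minikube -p functional-169724 image load dashboard-v2.7.0.tar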
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	20584dd03c3ce       nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42                          4 minutes ago       Running             myfrontend                  0                   cb749a039999d       sp-pod                                       default
	c092a076c571e       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 minutes ago       Running             dashboard-metrics-scraper   0                   4cc12c5b503a8       dashboard-metrics-scraper-5565989548-zhkml   kubernetes-dashboard
	c039b5e9295d1       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   ed459e09aa4d8       hello-node-5758569b79-l9dzq                  default
	67e69e5288bb6       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   4f35bc7e21ab4       hello-node-connect-9f67c86d4-j55l8           default
	8cf8bb66fc675       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    5 minutes ago       Exited              mount-munger                0                   87dc69f17ae61       busybox-mount                                default
	0cb8e8eb6c98c       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                          5 minutes ago       Running             nginx                       0                   37aab5a80f061       nginx-svc                                    default
	4c5dab5ebf4f4       aa5e3ebc0dfed                                                                                          5 minutes ago       Running             coredns                     2                   2aee0a7ebd601       coredns-7d764666f9-nd8zq                     kube-system
	ba9422ce22e85       8a4ded35a3eb1                                                                                          5 minutes ago       Running             kube-proxy                  3                   5e0a4a8d35c60       kube-proxy-d9lr9                             kube-system
	6d5ae1ae06bc8       6e38f40d628db                                                                                          5 minutes ago       Running             storage-provisioner         4                   4324084172f5b       storage-provisioner                          kube-system
	97abb5500abc1       7bb6219ddab95                                                                                          5 minutes ago       Running             kube-scheduler              3                   acb10ef6ba257       kube-scheduler-functional-169724             kube-system
	be942be84ea2e       45f3cc72d235f                                                                                          5 minutes ago       Running             kube-controller-manager     3                   8575ae5201a0a       kube-controller-manager-functional-169724    kube-system
	11d06fb246a49       a3e246e9556e9                                                                                          5 minutes ago       Running             etcd                        2                   6f5be81b7f3fb       etcd-functional-169724                       kube-system
	29cc4a6e2e615       aa9d02839d8de                                                                                          5 minutes ago       Running             kube-apiserver              0                   5fee737043a1b       kube-apiserver-functional-169724             kube-system
	7edd953743733       8a4ded35a3eb1                                                                                          5 minutes ago       Exited              kube-proxy                  2                   96b84bc5192e0       kube-proxy-d9lr9                             kube-system
	cdcf8eef29831       45f3cc72d235f                                                                                          5 minutes ago       Exited              kube-controller-manager     2                   e6f69ca09701e       kube-controller-manager-functional-169724    kube-system
	71e80cbfbdd27       7bb6219ddab95                                                                                          5 minutes ago       Exited              kube-scheduler              2                   f8f59ee5d6caf       kube-scheduler-functional-169724             kube-system
	55b0ea4b79d49       6e38f40d628db                                                                                          6 minutes ago       Created             storage-provisioner         3                   4adb513a00693       storage-provisioner                          kube-system
	1d0a06c5343ab       aa5e3ebc0dfed                                                                                          6 minutes ago       Exited              coredns                     1                   209d9310937be       coredns-7d764666f9-nd8zq                     kube-system
	e44038096d500       a3e246e9556e9                                                                                          6 minutes ago       Exited              etcd                        1                   f7e26095932d1       etcd-functional-169724                       kube-system
	
	
	==> coredns [1d0a06c5343a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41181 - 48782 "HINFO IN 8599916324902243977.3856247845758161228. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021557542s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4c5dab5ebf4f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42127 - 12039 "HINFO IN 2776022828749691506.5150390252904704739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046412032s
	
	
	==> describe nodes <==
	Name:               functional-169724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-169724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-169724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_19_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:18:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-169724
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:26:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:25:58 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:25:58 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:25:58 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:25:58 +0000   Tue, 02 Dec 2025 15:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-169724
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                63b0e81a-4f10-411a-8755-281b0479e5a4
	  Boot ID:                    bd6d4341-b6ad-469b-96fd-32b547c9d299
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.0.4
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-l9dzq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  default                     hello-node-connect-9f67c86d4-j55l8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     mysql-844cf969f6-dbl2r                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 coredns-7d764666f9-nd8zq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m51s
	  kube-system                 etcd-functional-169724                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m57s
	  kube-system                 kube-apiserver-functional-169724              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-functional-169724     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-proxy-d9lr9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-functional-169724              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-zhkml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-z9ghp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  7m52s  node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
	  Normal  RegisteredNode  6m37s  node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
	  Normal  RegisteredNode  5m33s  node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
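	
	Note: the pod list above still counts kubernetes-dashboard-b84665fb8-z9ghp and mysql-844cf969f6-dbl2r among the node's non-terminated pods, yet neither appears in the container status table, so both are presumably stuck before their containers ever start. A quick way to confirm the reason is sketched below; it assumes the kubeconfig context carries the profile name, as minikube normally configures it.
	
	  kubectl --context functional-169724 -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-z9ghp
	  kubectl --context functional-169724 get events -A --sort-by=.lastTimestamp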
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
	[Dec 2 15:12] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 33 66 cf 68 fe 08 06
	[  +0.000567] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
	[  +0.000756] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 fc 13 70 ef 7a 08 06
	[Dec 2 15:13] IPv4: martian source 10.244.0.31 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 6c d3 f0 2d 4b 08 06
	[Dec 2 15:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e e9 14 7c 53 c5 08 06
	[Dec 2 15:16] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 9e 41 2b 99 a9 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000012] ll header: 00000000: ff ff ff ff ff ff 3e 17 2b 55 09 b0 08 06
	[Dec 2 15:18] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a 5d a4 9c b5 12 08 06
	[Dec 2 15:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 1e 7c 51 67 ed 08 06
	[  +0.136746] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce f0 27 e7 63 6f 08 06
	[Dec 2 15:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 64 43 d3 ea d3 08 06
	[Dec 2 15:21] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e d7 73 53 48 b3 08 06
	
	
	==> etcd [11d06fb246a4] <==
	{"level":"warn","ts":"2025-12-02T15:21:21.911190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.921919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.929780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.937421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.944962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.953046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.959409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.965653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.973362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.983357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.989862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.996721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.003414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.010147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.017337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.024086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.030647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.037289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.044122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.056084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.068845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.075260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.081372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.088112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.131529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54148","server-name":"","error":"EOF"}
	
	
	==> etcd [e44038096d50] <==
	{"level":"warn","ts":"2025-12-02T15:20:18.102944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.110240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.130716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.137543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.145240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.152060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.197540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:21:06.533986Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:21:06.534077Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:21:06.534253Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:21:13.535652Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:21:13.535753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.535814Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:21:13.535886Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:21:13.535942Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:21:13.535954Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:21:13.536046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536115Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:21:13.536150Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.539423Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:21:13.539485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.539515Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:21:13.539521Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 15:26:58 up  2:09,  0 user,  load average: 0.07, 0.40, 1.25
	Linux functional-169724 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [29cc4a6e2e61] <==
	I1202 15:21:22.633196       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 15:21:22.638620       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:22.638638       1 policy_source.go:248] refreshing policies
	I1202 15:21:22.640385       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:21:23.245488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.245487       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.245488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.488408       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 15:21:24.334953       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:21:24.371368       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:21:24.398800       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:21:24.404715       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:21:25.966967       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:21:26.017714       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:21:40.039541       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.181.51"}
	I1202 15:21:46.683320       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.106.161"}
	I1202 15:21:47.549397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:21:47.631670       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.130.65"}
	I1202 15:21:55.079754       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.134.79"}
	I1202 15:21:57.923307       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:21:58.062074       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.110.134"}
	I1202 15:21:58.073491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.118.166"}
	I1202 15:21:58.545355       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.51.162"}
	E1202 15:22:04.080451       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52156: use of closed network connection
	E1202 15:22:11.189951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39652: use of closed network connection
	
	
	==> kube-controller-manager [be942be84ea2] <==
	I1202 15:21:25.726929       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728133       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728337       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728514       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728396       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728743       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728786       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728826       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.729297       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728920       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728944       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728940       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.730174       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.731415       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:25.826011       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.826041       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:21:25.826046       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:21:25.831694       1 shared_informer.go:377] "Caches are synced"
	E1202 15:21:57.984423       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.989421       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.993493       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.993617       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.999312       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:58.004413       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cdcf8eef2983] <==
	I1202 15:21:19.058440       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:21:19.065566       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:21:19.065592       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:19.067249       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:21:19.067342       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:21:19.067442       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:21:19.067513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [7edd95374373] <==
	I1202 15:21:18.848250       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:21:18.932703       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [ba9422ce22e8] <==
	I1202 15:21:23.842878       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:21:23.905118       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:24.005691       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:24.005731       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:21:24.005857       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:21:24.028724       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:21:24.028780       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:21:24.034388       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:21:24.034689       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:21:24.034705       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:24.035812       1 config.go:309] "Starting node config controller"
	I1202 15:21:24.035872       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:21:24.035884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:21:24.035931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:21:24.035944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:21:24.036053       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:21:24.036100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:21:24.036053       1 config.go:200] "Starting service config controller"
	I1202 15:21:24.036153       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:21:24.136338       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:21:24.136394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:21:24.136366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [71e80cbfbdd2] <==
	I1202 15:21:18.927381       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:21:18.930266       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:21:18.930311       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:21:18.930323       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:21:18.940239       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:21:18.940272       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:18.942264       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:18.942303       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:18.942471       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:21:18.942618       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:21:19.180778       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1202 15:21:19.181243       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:21:19.181284       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:21:19.181320       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1202 15:21:19.181366       1 shared_informer.go:373] "Unable to sync caches" logger="UnhandledError"
	I1202 15:21:19.181379       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:19.181425       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:21:19.181693       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [97abb5500abc] <==
	I1202 15:21:22.037813       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:21:22.524983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 15:21:22.525288       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 15:21:22.525444       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:21:22.525565       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:21:22.546981       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:21:22.547008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:22.549051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:22.549078       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:22.549233       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:21:22.549266       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:21:22.650016       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:25:42 functional-169724 kubelet[9712]: E1202 15:25:42.180701    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-169724" containerName="etcd"
	Dec 02 15:25:44 functional-169724 kubelet[9712]: E1202 15:25:44.182969    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:25:47 functional-169724 kubelet[9712]: E1202 15:25:47.180056    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:25:47 functional-169724 kubelet[9712]: E1202 15:25:47.182477    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:25:55 functional-169724 kubelet[9712]: E1202 15:25:55.182641    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.179662    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-169724" containerName="kube-scheduler"
	Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.179788    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:26:02 functional-169724 kubelet[9712]: E1202 15:26:02.182389    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:26:07 functional-169724 kubelet[9712]: E1202 15:26:07.179872    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zhkml" containerName="dashboard-metrics-scraper"
	Dec 02 15:26:09 functional-169724 kubelet[9712]: E1202 15:26:09.184047    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:11 functional-169724 kubelet[9712]: E1202 15:26:11.180477    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-169724" containerName="kube-apiserver"
	Dec 02 15:26:17 functional-169724 kubelet[9712]: E1202 15:26:17.180264    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:26:17 functional-169724 kubelet[9712]: E1202 15:26:17.183376    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:26:19 functional-169724 kubelet[9712]: E1202 15:26:19.179711    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-169724" containerName="kube-controller-manager"
	Dec 02 15:26:20 functional-169724 kubelet[9712]: E1202 15:26:20.185742    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:32 functional-169724 kubelet[9712]: E1202 15:26:32.179772    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:26:32 functional-169724 kubelet[9712]: E1202 15:26:32.182145    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:26:33 functional-169724 kubelet[9712]: E1202 15:26:33.182378    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:41 functional-169724 kubelet[9712]: E1202 15:26:41.180025    9712 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nd8zq" containerName="coredns"
	Dec 02 15:26:44 functional-169724 kubelet[9712]: E1202 15:26:44.193922    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:47 functional-169724 kubelet[9712]: E1202 15:26:47.179560    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:26:47 functional-169724 kubelet[9712]: E1202 15:26:47.182080    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:26:57 functional-169724 kubelet[9712]: E1202 15:26:57.182605    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:26:58 functional-169724 kubelet[9712]: E1202 15:26:58.180086    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:26:58 functional-169724 kubelet[9712]: E1202 15:26:58.182489    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	
	
	==> storage-provisioner [55b0ea4b79d4] <==
	
	
	==> storage-provisioner [6d5ae1ae06bc] <==
	W1202 15:26:32.404371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:34.407799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:34.411509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:36.414750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:36.419288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:38.422796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:38.428146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:40.431203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:40.435145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:42.438283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:42.443630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:44.447172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:44.451264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:46.454750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:46.461705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:48.464838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:48.468625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:50.471998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:50.475957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:52.479215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:52.484274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:54.487584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:54.491806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:56.495391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:26:56.499456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169724 -n functional-169724
helpers_test.go:269: (dbg) Run:  kubectl --context functional-169724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1 (73.469356ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169724/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:21:46 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://8cf8bb66fc675c3aadf1d2407e671757956904149a59af8e6fbd98ead10f012c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:21:50 +0000
	      Finished:     Tue, 02 Dec 2025 15:21:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cbd6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5cbd6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m12s  default-scheduler  Successfully assigned default/busybox-mount to functional-169724
	  Normal  Pulling    5m11s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.375s (3.098s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m8s   kubelet            Container created
	  Normal  Started    5m8s   kubelet            Container started
	
	
	Name:             mysql-844cf969f6-dbl2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169724/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:21:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:           10.244.0.16
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2klr6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2klr6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m                    default-scheduler  Successfully assigned default/mysql-844cf969f6-dbl2r to functional-169724
	  Normal   Pulling    2m1s (x5 over 4m59s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m1s (x5 over 4m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m1s (x5 over 4m59s)  kubelet            Error: ErrImagePull
	  Warning  Failed     74s (x15 over 4m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    14s (x20 over 4m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-z9ghp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1
E1202 15:27:55.507071  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:28:23.210141  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:31:48.809748  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.07s)
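The kubelet entries above show the actual cause of this failure: every pull of the dashboard image is rejected by Docker Hub with toomanyrequests, so the kubernetes-dashboard pod never starts and the dashboard --url command never prints a URL. One way to confirm that the agent's anonymous pull quota is exhausted is to query the rate-limit headers Docker Hub returns for an anonymous token; this is a minimal sketch assuming curl and jq are available on the Jenkins host:

	# Request an anonymous pull token for Docker's documented rate-limit test repository,
	# then read the ratelimit-limit / ratelimit-remaining headers from the registry.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

A ratelimit-remaining value of 0 would match the toomanyrequests errors logged by the kubelet above.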

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-169724 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-dbl2r" [a5b42979-2765-4023-afe8-d83c7d58c712] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169724 -n functional-169724
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-02 15:31:58.925723217 +0000 UTC m=+1319.692815989
functional_test.go:1804: (dbg) Run:  kubectl --context functional-169724 describe po mysql-844cf969f6-dbl2r -n default
functional_test.go:1804: (dbg) kubectl --context functional-169724 describe po mysql-844cf969f6-dbl2r -n default:
Name:             mysql-844cf969f6-dbl2r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-169724/192.168.49.2
Start Time:       Tue, 02 Dec 2025 15:21:58 +0000
Labels:           app=mysql
pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.16
IPs:
IP:           10.244.0.16
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2klr6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2klr6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-dbl2r to functional-169724
Normal   Pulling    7m1s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m46s (x22 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m46s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-169724 logs mysql-844cf969f6-dbl2r -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-169724 logs mysql-844cf969f6-dbl2r -n default: exit status 1 (75.711907ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-dbl2r" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-169724 logs mysql-844cf969f6-dbl2r -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
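As with the dashboard test, the root cause is the Docker Hub anonymous pull rate limit rather than anything in testdata/mysql.yaml: every attempt to pull docker.io/mysql:5.7 is rejected with toomanyrequests, so the pod stays in ImagePullBackOff for the full 10m0s wait. Two possible mitigations for runs like this (a sketch of options, not something the test harness does today) are to side-load the image into the cluster so the kubelet never has to contact Docker Hub, assuming a copy of mysql:5.7 is already present in the host's Docker daemon, or to create the profile with a registry mirror:

	# Side-load the image from the host's Docker daemon into the functional-169724 node ...
	out/minikube-linux-amd64 -p functional-169724 image load docker.io/mysql:5.7
	# ... or start the profile with a mirror so docker.io pulls bypass the rate-limited registry.
	out/minikube-linux-amd64 start -p functional-169724 --registry-mirror=https://mirror.gcr.io

With the image already on the node, the kubectl replace --force -f testdata/mysql.yaml step at the top of this test should find it locally instead of hitting the registry.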
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-169724
helpers_test.go:243: (dbg) docker inspect functional-169724:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
	        "Created": "2025-12-02T15:18:38.356956471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T15:18:38.392277615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1d5bf317f755cf68e91d0ebb61ffb5a29589825b974c7e2b25db20af78120fde",
	        "ResolvConfPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/hosts",
	        "LogPath": "/var/lib/docker/containers/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e/6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e-json.log",
	        "Name": "/functional-169724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-169724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-169724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6aca14b454067585a9f00028d5845488d973f184b936306a121375ca3fc8322e",
	                "LowerDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62-init/diff:/var/lib/docker/overlay2/07ec335befb7b26acaacda7ed9253badae67627e1c23bce677fab65b2eb5425a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01183119e5159d4abe8a85b62d0e6721d14eaec763519f5c1a1bd63f83b7ca62/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-169724",
	                "Source": "/var/lib/docker/volumes/functional-169724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-169724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-169724",
	                "name.minikube.sigs.k8s.io": "functional-169724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e2ce3823480529bac422fa191445f465825bae9fde6bbb6696f94ab8b9a30fe8",
	            "SandboxKey": "/var/run/docker/netns/e2ce38234805",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-169724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "300c2391fbf7a793261c8a44888e7fc898256c3ddc3ed8c9dc4987126019541c",
	                    "EndpointID": "7cb161253f17eaa87e372155b28d897ea3742d100fc2c51b11787e5e14e7c0fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6a:27:8d:3e:7f:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-169724",
	                        "6aca14b45406"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-169724 -n functional-169724
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 logs -n 25: (1.029958517s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-169724 image ls --format yaml --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format short --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ cp             │ functional-169724 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                  │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ ssh            │ functional-169724 ssh -n functional-169724 sudo cat /tmp/does/not/exist/cp-test.txt                                                                        │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ update-context │ functional-169724 update-context --alsologtostderr -v=2                                                                                                    │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image save kicbase/echo-server:functional-169724 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image rm kicbase/echo-server:functional-169724 --alsologtostderr                                                                         │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image save --daemon kicbase/echo-server:functional-169724 --alsologtostderr                                                              │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format yaml --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls --format short --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ ssh            │ functional-169724 ssh pgrep buildkitd                                                                                                                      │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │                     │
	│ image          │ functional-169724 image ls --format json --alsologtostderr                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls --format table --alsologtostderr                                                                                                │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr                                                     │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	│ image          │ functional-169724 image ls                                                                                                                                 │ functional-169724 │ jenkins │ v1.37.0 │ 02 Dec 25 15:22 UTC │ 02 Dec 25 15:22 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:21:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:21:47.314586  641068 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:21:47.314684  641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.314692  641068 out.go:374] Setting ErrFile to fd 2...
	I1202 15:21:47.314696  641068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.314930  641068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:21:47.315423  641068 out.go:368] Setting JSON to false
	I1202 15:21:47.316714  641068 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7456,"bootTime":1764681451,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:21:47.316780  641068 start.go:143] virtualization: kvm guest
	I1202 15:21:47.318463  641068 out.go:179] * [functional-169724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:21:47.319518  641068 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:21:47.319534  641068 notify.go:221] Checking for updates...
	I1202 15:21:47.322273  641068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:21:47.323401  641068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:21:47.324434  641068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:21:47.325717  641068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:21:47.327279  641068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:21:47.329086  641068 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1202 15:21:47.329971  641068 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:21:47.357518  641068 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:21:47.357625  641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:47.421491  641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.409816405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:47.421613  641068 docker.go:319] overlay module found
	I1202 15:21:47.423070  641068 out.go:179] * Using the docker driver based on existing profile
	I1202 15:21:47.424115  641068 start.go:309] selected driver: docker
	I1202 15:21:47.424136  641068 start.go:927] validating driver "docker" against &{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:47.424267  641068 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:21:47.424387  641068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:47.481154  641068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.471959518 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:47.481877  641068 cni.go:84] Creating CNI manager for ""
	I1202 15:21:47.481949  641068 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1202 15:21:47.482010  641068 start.go:353] cluster config:
	{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:47.483575  641068 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 02 15:22:05 functional-169724 dockerd[7755]: time="2025-12-02T15:22:05.467560101Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=b32f1c40ba5d ep=k8s_POD_sp-pod_default_d8340d35-b3d6-4326-b084-210d089189c8_0 net=none nid=41dbb18abfc2
	Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cb749a039999dd2401a723e0c1e60d5a92994ce685ff47c06d4aef5217e81059/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 02 15:22:05 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:05Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Dec 02 15:22:12 functional-169724 dockerd[7755]: time="2025-12-02T15:22:12.297492774Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.199739095Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:22:13 functional-169724 dockerd[7755]: time="2025-12-02T15:22:13.231312464Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:17 functional-169724 dockerd[7755]: 2025/12/02 15:22:17 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 02 15:22:18 functional-169724 dockerd[7755]: time="2025-12-02T15:22:18.824698620Z" level=info msg="sbJoin: gwep4 ''->'4689289d919b', gwep6 ''->''"
	Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="error getting RW layer size for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8': Error response from daemon: No such container: 981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8"
	Dec 02 15:22:30 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID '981a49c19bbfceeb360f21f717e70de1549aecb3102ef304675c9c5f199d96d8'"
	Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.200382357Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:22:38 functional-169724 dockerd[7755]: time="2025-12-02T15:22:38.292599059Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:22:38 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:22:38Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 02 15:22:39 functional-169724 dockerd[7755]: time="2025-12-02T15:22:39.272473055Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:27 functional-169724 dockerd[7755]: time="2025-12-02T15:23:27.281691411Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.198964490Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:23:31 functional-169724 dockerd[7755]: time="2025-12-02T15:23:31.235678685Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.200258483Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:24:55 functional-169724 dockerd[7755]: time="2025-12-02T15:24:55.295911943Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:24:55 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:24:55Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 02 15:24:57 functional-169724 dockerd[7755]: time="2025-12-02T15:24:57.274599396Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:27:39 functional-169724 dockerd[7755]: time="2025-12-02T15:27:39.376194214Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 02 15:27:39 functional-169724 cri-dockerd[8507]: time="2025-12-02T15:27:39Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Dec 02 15:27:39 functional-169724 dockerd[7755]: time="2025-12-02T15:27:39.393538894Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 02 15:27:39 functional-169724 dockerd[7755]: time="2025-12-02T15:27:39.425266865Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
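	
	Note: the repeated "toomanyrequests" errors above are Docker Hub's anonymous pull rate limit rejecting the kubernetesui/dashboard and mysql:5.7 pulls, which is why those pods remain in ImagePullBackOff in the kubelet log further below. A minimal mitigation sketch, assuming authenticated Docker Hub credentials are available on the host (these commands are illustrative and not part of the test harness; the profile name functional-169724 is taken from this run):
	
	  # pull the images on the host with an authenticated session, then load them into the cluster profile
	  docker login
	  docker pull docker.io/kubernetesui/dashboard:v2.7.0
	  docker pull docker.io/mysql:5.7
	  minikube -p functional-169724 image load docker.io/kubernetesui/dashboard:v2.7.0
	  minikube -p functional-169724 image load docker.io/mysql:5.7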
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	20584dd03c3ce       nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42                          9 minutes ago       Running             myfrontend                  0                   cb749a039999d       sp-pod                                       default
	c092a076c571e       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   10 minutes ago      Running             dashboard-metrics-scraper   0                   4cc12c5b503a8       dashboard-metrics-scraper-5565989548-zhkml   kubernetes-dashboard
	c039b5e9295d1       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            10 minutes ago      Running             echo-server                 0                   ed459e09aa4d8       hello-node-5758569b79-l9dzq                  default
	67e69e5288bb6       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            10 minutes ago      Running             echo-server                 0                   4f35bc7e21ab4       hello-node-connect-9f67c86d4-j55l8           default
	8cf8bb66fc675       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    10 minutes ago      Exited              mount-munger                0                   87dc69f17ae61       busybox-mount                                default
	0cb8e8eb6c98c       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                          10 minutes ago      Running             nginx                       0                   37aab5a80f061       nginx-svc                                    default
	4c5dab5ebf4f4       aa5e3ebc0dfed                                                                                          10 minutes ago      Running             coredns                     2                   2aee0a7ebd601       coredns-7d764666f9-nd8zq                     kube-system
	ba9422ce22e85       8a4ded35a3eb1                                                                                          10 minutes ago      Running             kube-proxy                  3                   5e0a4a8d35c60       kube-proxy-d9lr9                             kube-system
	6d5ae1ae06bc8       6e38f40d628db                                                                                          10 minutes ago      Running             storage-provisioner         4                   4324084172f5b       storage-provisioner                          kube-system
	97abb5500abc1       7bb6219ddab95                                                                                          10 minutes ago      Running             kube-scheduler              3                   acb10ef6ba257       kube-scheduler-functional-169724             kube-system
	be942be84ea2e       45f3cc72d235f                                                                                          10 minutes ago      Running             kube-controller-manager     3                   8575ae5201a0a       kube-controller-manager-functional-169724    kube-system
	11d06fb246a49       a3e246e9556e9                                                                                          10 minutes ago      Running             etcd                        2                   6f5be81b7f3fb       etcd-functional-169724                       kube-system
	29cc4a6e2e615       aa9d02839d8de                                                                                          10 minutes ago      Running             kube-apiserver              0                   5fee737043a1b       kube-apiserver-functional-169724             kube-system
	7edd953743733       8a4ded35a3eb1                                                                                          10 minutes ago      Exited              kube-proxy                  2                   96b84bc5192e0       kube-proxy-d9lr9                             kube-system
	cdcf8eef29831       45f3cc72d235f                                                                                          10 minutes ago      Exited              kube-controller-manager     2                   e6f69ca09701e       kube-controller-manager-functional-169724    kube-system
	71e80cbfbdd27       7bb6219ddab95                                                                                          10 minutes ago      Exited              kube-scheduler              2                   f8f59ee5d6caf       kube-scheduler-functional-169724             kube-system
	55b0ea4b79d49       6e38f40d628db                                                                                          11 minutes ago      Created             storage-provisioner         3                   4adb513a00693       storage-provisioner                          kube-system
	1d0a06c5343ab       aa5e3ebc0dfed                                                                                          11 minutes ago      Exited              coredns                     1                   209d9310937be       coredns-7d764666f9-nd8zq                     kube-system
	e44038096d500       a3e246e9556e9                                                                                          11 minutes ago      Exited              etcd                        1                   f7e26095932d1       etcd-functional-169724                       kube-system
	
	
	==> coredns [1d0a06c5343a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41181 - 48782 "HINFO IN 8599916324902243977.3856247845758161228. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021557542s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4c5dab5ebf4f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42127 - 12039 "HINFO IN 2776022828749691506.5150390252904704739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046412032s
	
	
	==> describe nodes <==
	Name:               functional-169724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-169724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f814d1da9a9aaec9cd0504e94606ef30589e1689
	                    minikube.k8s.io/name=functional-169724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T15_19_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 15:18:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-169724
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 15:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 15:31:02 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 15:31:02 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 15:31:02 +0000   Tue, 02 Dec 2025 15:18:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 15:31:02 +0000   Tue, 02 Dec 2025 15:19:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-169724
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 c31a325af81b969158c21fa769271857
	  System UUID:                63b0e81a-4f10-411a-8755-281b0479e5a4
	  Boot ID:                    bd6d4341-b6ad-469b-96fd-32b547c9d299
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.0.4
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-l9dzq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-j55l8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-dbl2r                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-7d764666f9-nd8zq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-169724                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-169724              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-169724     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d9lr9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-169724              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-zhkml    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-z9ghp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-169724 event: Registered Node functional-169724 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
	[Dec 2 15:12] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 33 66 cf 68 fe 08 06
	[  +0.000567] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa c4 8b 72 23 67 08 06
	[  +0.000756] IPv4: martian source 10.244.0.32 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 fc 13 70 ef 7a 08 06
	[Dec 2 15:13] IPv4: martian source 10.244.0.31 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 6c d3 f0 2d 4b 08 06
	[Dec 2 15:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e e9 14 7c 53 c5 08 06
	[Dec 2 15:16] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 9e 41 2b 99 a9 08 06
	[Dec 2 15:17] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000012] ll header: 00000000: ff ff ff ff ff ff 3e 17 2b 55 09 b0 08 06
	[Dec 2 15:18] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1a 5d a4 9c b5 12 08 06
	[Dec 2 15:19] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 1e 7c 51 67 ed 08 06
	[  +0.136746] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce f0 27 e7 63 6f 08 06
	[Dec 2 15:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 64 43 d3 ea d3 08 06
	[Dec 2 15:21] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e d7 73 53 48 b3 08 06
	
	
	==> etcd [11d06fb246a4] <==
	{"level":"warn","ts":"2025-12-02T15:21:21.937421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.944962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.953046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.959409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.965653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.973362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.983357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.989862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:21.996721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.003414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.010147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.017337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.024086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.030647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.037289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.044122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.056084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.068845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.075260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.081372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.088112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:21:22.131529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54148","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:31:21.637230Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1247}
	{"level":"info","ts":"2025-12-02T15:31:21.657330Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1247,"took":"19.700935ms","hash":282124229,"current-db-size-bytes":3751936,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1802240,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-02T15:31:21.657383Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":282124229,"revision":1247,"compact-revision":-1}
	
	
	==> etcd [e44038096d50] <==
	{"level":"warn","ts":"2025-12-02T15:20:18.102944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.110240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.130716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.137543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.145240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.152060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T15:20:18.197540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-02T15:21:06.533986Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-02T15:21:06.534077Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-02T15:21:06.534253Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:21:13.535652Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-02T15:21:13.535753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.535814Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-02T15:21:13.535886Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-02T15:21:13.535942Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-02T15:21:13.535954Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:21:13.536046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536115Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-02T15:21:13.536141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-02T15:21:13.536150Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.539423Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-02T15:21:13.539485Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-02T15:21:13.539515Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-02T15:21:13.539521Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-169724","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 15:32:00 up  2:14,  0 user,  load average: 0.08, 0.19, 0.91
	Linux functional-169724 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [29cc4a6e2e61] <==
	I1202 15:21:22.638620       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:22.638638       1 policy_source.go:248] refreshing policies
	I1202 15:21:22.640385       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 15:21:23.245488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.245487       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.245488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 15:21:23.488408       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1202 15:21:24.334953       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 15:21:24.371368       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 15:21:24.398800       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 15:21:24.404715       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 15:21:25.966967       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 15:21:26.017714       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 15:21:40.039541       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.181.51"}
	I1202 15:21:46.683320       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.106.161"}
	I1202 15:21:47.549397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 15:21:47.631670       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.130.65"}
	I1202 15:21:55.079754       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.134.79"}
	I1202 15:21:57.923307       1 controller.go:667] quota admission added evaluator for: namespaces
	I1202 15:21:58.062074       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.110.134"}
	I1202 15:21:58.073491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.118.166"}
	I1202 15:21:58.545355       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.51.162"}
	E1202 15:22:04.080451       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52156: use of closed network connection
	E1202 15:22:11.189951       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39652: use of closed network connection
	I1202 15:31:22.540748       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [be942be84ea2] <==
	I1202 15:21:25.726929       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728133       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728337       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728514       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728396       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728642       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728743       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728786       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728826       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.729297       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728920       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728944       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.728940       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.730174       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.731415       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:25.826011       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:25.826041       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1202 15:21:25.826046       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1202 15:21:25.831694       1 shared_informer.go:377] "Caches are synced"
	E1202 15:21:57.984423       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.989421       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.993493       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.993617       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:57.999312       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1202 15:21:58.004413       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [cdcf8eef2983] <==
	I1202 15:21:19.058440       1 serving.go:386] Generated self-signed cert in-memory
	I1202 15:21:19.065566       1 controllermanager.go:189] "Starting" version="v1.35.0-beta.0"
	I1202 15:21:19.065592       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:19.067249       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 15:21:19.067342       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 15:21:19.067442       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1202 15:21:19.067513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [7edd95374373] <==
	I1202 15:21:18.848250       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:21:18.932703       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [ba9422ce22e8] <==
	I1202 15:21:23.842878       1 server_linux.go:53] "Using iptables proxy"
	I1202 15:21:23.905118       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:24.005691       1 shared_informer.go:377] "Caches are synced"
	I1202 15:21:24.005731       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1202 15:21:24.005857       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 15:21:24.028724       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 15:21:24.028780       1 server_linux.go:136] "Using iptables Proxier"
	I1202 15:21:24.034388       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 15:21:24.034689       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1202 15:21:24.034705       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:24.035812       1 config.go:309] "Starting node config controller"
	I1202 15:21:24.035872       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 15:21:24.035884       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 15:21:24.035931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 15:21:24.035944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 15:21:24.036053       1 config.go:106] "Starting endpoint slice config controller"
	I1202 15:21:24.036100       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 15:21:24.036053       1 config.go:200] "Starting service config controller"
	I1202 15:21:24.036153       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 15:21:24.136338       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 15:21:24.136394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 15:21:24.136366       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [71e80cbfbdd2] <==
	I1202 15:21:18.927381       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:21:18.930266       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	W1202 15:21:18.930311       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:21:18.930323       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:21:18.940239       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:21:18.940272       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:18.942264       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:18.942303       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:18.942471       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:21:18.942618       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 15:21:19.180778       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1202 15:21:19.181243       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1202 15:21:19.181284       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1202 15:21:19.181320       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1202 15:21:19.181366       1 shared_informer.go:373] "Unable to sync caches" logger="UnhandledError"
	I1202 15:21:19.181379       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:19.181425       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1202 15:21:19.181693       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [97abb5500abc] <==
	I1202 15:21:22.037813       1 serving.go:386] Generated self-signed cert in-memory
	W1202 15:21:22.524983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 15:21:22.525288       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 15:21:22.525444       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 15:21:22.525565       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 15:21:22.546981       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1202 15:21:22.547008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 15:21:22.549051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 15:21:22.549078       1 shared_informer.go:370] "Waiting for caches to sync"
	I1202 15:21:22.549233       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 15:21:22.549266       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 15:21:22.650016       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 02 15:30:40 functional-169724 kubelet[9712]: E1202 15:30:40.182396    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:30:41 functional-169724 kubelet[9712]: E1202 15:30:41.180215    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-169724" containerName="kube-controller-manager"
	Dec 02 15:30:48 functional-169724 kubelet[9712]: E1202 15:30:48.182665    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:30:53 functional-169724 kubelet[9712]: E1202 15:30:53.179571    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:30:53 functional-169724 kubelet[9712]: E1202 15:30:53.182716    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:30:59 functional-169724 kubelet[9712]: E1202 15:30:59.182407    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:31:06 functional-169724 kubelet[9712]: E1202 15:31:06.188412    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:31:06 functional-169724 kubelet[9712]: E1202 15:31:06.188452    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-zhkml" containerName="dashboard-metrics-scraper"
	Dec 02 15:31:06 functional-169724 kubelet[9712]: E1202 15:31:06.188541    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-169724" containerName="kube-scheduler"
	Dec 02 15:31:06 functional-169724 kubelet[9712]: E1202 15:31:06.190578    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:31:10 functional-169724 kubelet[9712]: E1202 15:31:10.187215    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-169724" containerName="kube-apiserver"
	Dec 02 15:31:13 functional-169724 kubelet[9712]: E1202 15:31:13.182706    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:31:19 functional-169724 kubelet[9712]: E1202 15:31:19.179620    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:31:19 functional-169724 kubelet[9712]: E1202 15:31:19.182350    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:31:23 functional-169724 kubelet[9712]: E1202 15:31:23.180367    9712 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nd8zq" containerName="coredns"
	Dec 02 15:31:28 functional-169724 kubelet[9712]: E1202 15:31:28.182752    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:31:33 functional-169724 kubelet[9712]: E1202 15:31:33.179989    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:31:33 functional-169724 kubelet[9712]: E1202 15:31:33.182517    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:31:42 functional-169724 kubelet[9712]: E1202 15:31:42.183464    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:31:44 functional-169724 kubelet[9712]: E1202 15:31:44.180714    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:31:44 functional-169724 kubelet[9712]: E1202 15:31:44.183310    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	Dec 02 15:31:45 functional-169724 kubelet[9712]: E1202 15:31:45.179836    9712 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-169724" containerName="kube-controller-manager"
	Dec 02 15:31:57 functional-169724 kubelet[9712]: E1202 15:31:57.182008    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-dbl2r" podUID="a5b42979-2765-4023-afe8-d83c7d58c712"
	Dec 02 15:31:59 functional-169724 kubelet[9712]: E1202 15:31:59.179696    9712 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" containerName="kubernetes-dashboard"
	Dec 02 15:31:59 functional-169724 kubelet[9712]: E1202 15:31:59.182110    9712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-z9ghp" podUID="7f04674d-ee7f-47b3-a9cb-9b205a1ddcd4"
	
	
	==> storage-provisioner [55b0ea4b79d4] <==
	
	
	==> storage-provisioner [6d5ae1ae06bc] <==
	W1202 15:31:35.602140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:37.605859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:37.612020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:39.615874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:39.620742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:41.623938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:41.627735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:43.630931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:43.638358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:45.641677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:45.645403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:47.649366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:47.654697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:49.658320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:49.662400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:51.665660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:51.669601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:53.672740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:53.677446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:55.681767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:55.685727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:57.689211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:57.693344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:59.696290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 15:31:59.701457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
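Note: the repeated storage-provisioner warnings above are harmless for this run; the provisioner still talks to the core/v1 Endpoints API (most likely for its leader-election lock), which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. As a purely illustrative check that was not part of this test run, the EndpointSlice equivalents could be listed against the same profile with:

	kubectl --context functional-169724 get endpointslices.discovery.k8s.io -A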
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-169724 -n functional-169724
helpers_test.go:269: (dbg) Run:  kubectl --context functional-169724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1 (83.078587ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169724/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:21:46 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://8cf8bb66fc675c3aadf1d2407e671757956904149a59af8e6fbd98ead10f012c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 02 Dec 2025 15:21:50 +0000
	      Finished:     Tue, 02 Dec 2025 15:21:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5cbd6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5cbd6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-169724
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.375s (3.098s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             mysql-844cf969f6-dbl2r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-169724/192.168.49.2
	Start Time:       Tue, 02 Dec 2025 15:21:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:           10.244.0.16
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2klr6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2klr6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-844cf969f6-dbl2r to functional-169724
	  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x22 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m48s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-z9ghp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-169724 describe pod busybox-mount mysql-844cf969f6-dbl2r kubernetes-dashboard-b84665fb8-z9ghp: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.56s)
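Note: both failures in this report (DashboardCmd and MySQL) share the root cause visible in the kubelet log and pod events above: docker.io rejected the pulls with "toomanyrequests" because the unauthenticated pull rate limit was exhausted, so the kubernetesui/dashboard:v2.7.0 and mysql:5.7 images never started. The post-mortem describe additionally exits 1 apparently because the dashboard pod had already been cleaned up by the time it ran ("not found"). One possible mitigation, sketched here and not exercised in this run, is to pull the image on an authenticated host and pre-load it into the profile instead of letting the kubelet pull it:

	docker pull mysql:5.7
	minikube -p functional-169724 image load mysql:5.7

Authenticating the cluster's Docker daemon, or starting the profile with a registry mirror (--registry-mirror), would address the rate limit more generally.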

                                                
                                    

Test pass (405/435)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.34
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 3.2
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.15
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
29 TestDownloadOnlyKic 0.44
30 TestBinaryMirror 0.86
31 TestOffline 94.06
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 95.53
38 TestAddons/serial/Volcano 40.19
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.54
44 TestAddons/parallel/Registry 15.68
45 TestAddons/parallel/RegistryCreds 0.66
46 TestAddons/parallel/Ingress 19.85
47 TestAddons/parallel/InspektorGadget 11.72
48 TestAddons/parallel/MetricsServer 5.74
50 TestAddons/parallel/CSI 56.46
51 TestAddons/parallel/Headlamp 17.41
52 TestAddons/parallel/CloudSpanner 5.48
53 TestAddons/parallel/LocalPath 51.64
54 TestAddons/parallel/NvidiaDevicePlugin 5.46
55 TestAddons/parallel/Yakd 10.67
56 TestAddons/parallel/AmdGpuDevicePlugin 5.53
57 TestAddons/StoppedEnableDisable 11.25
58 TestCertOptions 27.84
59 TestCertExpiration 243.06
60 TestDockerFlags 27.14
61 TestForceSystemdFlag 24.55
62 TestForceSystemdEnv 31.77
67 TestErrorSpam/setup 24.8
68 TestErrorSpam/start 0.69
69 TestErrorSpam/status 0.97
70 TestErrorSpam/pause 1.29
71 TestErrorSpam/unpause 1.37
72 TestErrorSpam/stop 11.12
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 69.46
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 50.68
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.2
84 TestFunctional/serial/CacheCmd/cache/add_local 0.83
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.43
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 51.55
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.14
95 TestFunctional/serial/LogsFileCmd 1.16
96 TestFunctional/serial/InvalidService 4.25
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 8.86
100 TestFunctional/parallel/DryRun 0.43
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 18.76
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 28.41
110 TestFunctional/parallel/SSHCmd 0.68
111 TestFunctional/parallel/CpCmd 2.22
112 TestFunctional/parallel/MySQL 23.4
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.18
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
122 TestFunctional/parallel/License 0.3
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.31
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.15
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.42
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
138 TestFunctional/parallel/MountCmd/any-port 8.01
139 TestFunctional/parallel/ServiceCmd/List 1.75
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.75
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.63
142 TestFunctional/parallel/ServiceCmd/Format 0.63
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
147 TestFunctional/parallel/ImageCommands/ImageBuild 2.53
148 TestFunctional/parallel/ImageCommands/Setup 0.5
149 TestFunctional/parallel/MountCmd/specific-port 2.23
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
151 TestFunctional/parallel/ServiceCmd/URL 0.66
152 TestFunctional/parallel/Version/short 0.08
153 TestFunctional/parallel/Version/components 0.57
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
155 TestFunctional/parallel/DockerEnv/bash 1.19
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
161 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
162 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
163 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
164 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
165 TestFunctional/delete_echo-server_images 0.04
166 TestFunctional/delete_my-image_image 0.02
167 TestFunctional/delete_minikube_cached_images 0.02
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 69.03
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 51.4
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.09
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.76
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.46
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.15
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.14
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 54.61
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.06
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.05
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.7
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.57
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.43
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.22
198 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.18
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.6
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 24.64
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.72
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.87
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.31
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.83
214 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.28
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.3
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 1.25
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.12
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.53
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.54
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.51
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 8.26
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.99
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.2
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.84
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.53
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.17
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.74
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.72
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.55
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.55
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.54
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.25
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.26
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.26
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.7
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.17
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.02
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.91
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.64
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.38
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.05
262 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
263 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
267 TestMultiControlPlane/serial/StartCluster 137.49
268 TestMultiControlPlane/serial/DeployApp 5.16
269 TestMultiControlPlane/serial/PingHostFromPods 1.36
270 TestMultiControlPlane/serial/AddWorkerNode 33.91
271 TestMultiControlPlane/serial/NodeLabels 0.07
272 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
273 TestMultiControlPlane/serial/CopyFile 18.67
274 TestMultiControlPlane/serial/StopSecondaryNode 11.7
275 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
276 TestMultiControlPlane/serial/RestartSecondaryNode 37.28
277 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
278 TestMultiControlPlane/serial/RestartClusterKeepsNodes 173.06
279 TestMultiControlPlane/serial/DeleteSecondaryNode 9.75
280 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
281 TestMultiControlPlane/serial/StopCluster 32.67
282 TestMultiControlPlane/serial/RestartCluster 97.7
283 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
284 TestMultiControlPlane/serial/AddSecondaryNode 48.91
285 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
288 TestImageBuild/serial/Setup 25.48
289 TestImageBuild/serial/NormalBuild 1.01
290 TestImageBuild/serial/BuildWithBuildArg 0.68
291 TestImageBuild/serial/BuildWithDockerIgnore 0.49
292 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.52
297 TestJSONOutput/start/Command 70
298 TestJSONOutput/start/Audit 0
300 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
301 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
303 TestJSONOutput/pause/Command 0.54
304 TestJSONOutput/pause/Audit 0
306 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
307 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
309 TestJSONOutput/unpause/Command 0.53
310 TestJSONOutput/unpause/Audit 0
312 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
313 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
315 TestJSONOutput/stop/Command 10.93
316 TestJSONOutput/stop/Audit 0
318 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
319 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
320 TestErrorJSONOutput 0.25
322 TestKicCustomNetwork/create_custom_network 23.35
323 TestKicCustomNetwork/use_default_bridge_network 22.82
324 TestKicExistingNetwork 25.84
325 TestKicCustomSubnet 24.56
326 TestKicStaticIP 27.16
327 TestMainNoArgs 0.07
328 TestMinikubeProfile 51.96
331 TestMountStart/serial/StartWithMountFirst 9.55
332 TestMountStart/serial/VerifyMountFirst 0.3
333 TestMountStart/serial/StartWithMountSecond 6.64
334 TestMountStart/serial/VerifyMountSecond 0.29
335 TestMountStart/serial/DeleteFirst 1.55
336 TestMountStart/serial/VerifyMountPostDelete 0.3
337 TestMountStart/serial/Stop 1.26
338 TestMountStart/serial/RestartStopped 8.4
339 TestMountStart/serial/VerifyMountPostStop 0.3
342 TestMultiNode/serial/FreshStart2Nodes 78.69
343 TestMultiNode/serial/DeployApp2Nodes 4.13
344 TestMultiNode/serial/PingHostFrom2Pods 0.96
345 TestMultiNode/serial/AddNode 33.96
346 TestMultiNode/serial/MultiNodeLabels 0.08
347 TestMultiNode/serial/ProfileList 0.72
348 TestMultiNode/serial/CopyFile 10.78
349 TestMultiNode/serial/StopNode 2.36
350 TestMultiNode/serial/StartAfterStop 8.8
351 TestMultiNode/serial/RestartKeepsNodes 74.64
352 TestMultiNode/serial/DeleteNode 5.41
353 TestMultiNode/serial/StopMultiNode 21.95
354 TestMultiNode/serial/RestartMultiNode 46.34
355 TestMultiNode/serial/ValidateNameConflict 28.37
360 TestPreload 135.53
362 TestScheduledStopUnix 97.37
363 TestSkaffold 76.99
365 TestInsufficientStorage 9.53
366 TestRunningBinaryUpgrade 326.74
368 TestKubernetesUpgrade 348.85
369 TestMissingContainerUpgrade 99.66
371 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
372 TestStoppedBinaryUpgrade/Setup 0.62
373 TestNoKubernetes/serial/StartWithK8s 43.65
374 TestStoppedBinaryUpgrade/Upgrade 337.65
375 TestNoKubernetes/serial/StartWithStopK8s 16.57
376 TestNoKubernetes/serial/Start 8.66
377 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
378 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
379 TestNoKubernetes/serial/ProfileList 31.28
380 TestNoKubernetes/serial/Stop 1.88
381 TestNoKubernetes/serial/StartNoArgs 8.29
382 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
401 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
403 TestPause/serial/Start 72.44
404 TestNetworkPlugins/group/auto/Start 69.32
405 TestNetworkPlugins/group/kindnet/Start 53.33
406 TestNetworkPlugins/group/calico/Start 67.3
407 TestPause/serial/SecondStartNoReconfiguration 52.71
408 TestNetworkPlugins/group/auto/KubeletFlags 0.38
409 TestNetworkPlugins/group/auto/NetCatPod 10.31
410 TestNetworkPlugins/group/auto/DNS 0.16
411 TestNetworkPlugins/group/auto/Localhost 0.14
412 TestNetworkPlugins/group/auto/HairPin 0.18
413 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
414 TestNetworkPlugins/group/custom-flannel/Start 43.16
415 TestNetworkPlugins/group/kindnet/KubeletFlags 0.55
416 TestNetworkPlugins/group/kindnet/NetCatPod 9.76
417 TestNetworkPlugins/group/kindnet/DNS 0.16
418 TestNetworkPlugins/group/kindnet/Localhost 0.14
419 TestNetworkPlugins/group/kindnet/HairPin 0.15
420 TestPause/serial/Pause 0.56
421 TestPause/serial/VerifyStatus 0.37
422 TestNetworkPlugins/group/calico/ControllerPod 6.01
423 TestPause/serial/Unpause 0.65
424 TestPause/serial/PauseAgain 0.63
425 TestPause/serial/DeletePaused 2.59
426 TestPause/serial/VerifyDeletedResources 2.54
427 TestNetworkPlugins/group/calico/KubeletFlags 0.36
428 TestNetworkPlugins/group/calico/NetCatPod 12.27
429 TestNetworkPlugins/group/false/Start 42.26
430 TestNetworkPlugins/group/enable-default-cni/Start 41.06
431 TestNetworkPlugins/group/calico/DNS 0.18
432 TestNetworkPlugins/group/calico/Localhost 0.14
433 TestNetworkPlugins/group/calico/HairPin 0.16
434 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
435 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
436 TestNetworkPlugins/group/custom-flannel/DNS 0.19
437 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
438 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
439 TestNetworkPlugins/group/flannel/Start 42.96
440 TestNetworkPlugins/group/false/KubeletFlags 0.35
441 TestNetworkPlugins/group/false/NetCatPod 9.21
442 TestNetworkPlugins/group/false/DNS 0.16
443 TestNetworkPlugins/group/false/Localhost 0.17
444 TestNetworkPlugins/group/false/HairPin 0.12
445 TestNetworkPlugins/group/bridge/Start 67.91
446 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
447 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
448 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
449 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
450 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
451 TestNetworkPlugins/group/kubenet/Start 68.28
452 TestNetworkPlugins/group/flannel/ControllerPod 6.01
453 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
454 TestNetworkPlugins/group/flannel/NetCatPod 10.19
456 TestStartStop/group/old-k8s-version/serial/FirstStart 81.71
457 TestNetworkPlugins/group/flannel/DNS 0.17
458 TestNetworkPlugins/group/flannel/Localhost 0.16
459 TestNetworkPlugins/group/flannel/HairPin 0.13
461 TestStartStop/group/no-preload/serial/FirstStart 68.52
462 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
463 TestNetworkPlugins/group/bridge/NetCatPod 8.19
464 TestNetworkPlugins/group/bridge/DNS 0.16
465 TestNetworkPlugins/group/bridge/Localhost 0.13
466 TestNetworkPlugins/group/bridge/HairPin 0.14
467 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
468 TestNetworkPlugins/group/kubenet/NetCatPod 9.24
470 TestStartStop/group/embed-certs/serial/FirstStart 39.34
471 TestNetworkPlugins/group/kubenet/DNS 0.16
472 TestNetworkPlugins/group/kubenet/Localhost 0.15
473 TestNetworkPlugins/group/kubenet/HairPin 0.14
474 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
476 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.13
477 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.59
478 TestStartStop/group/old-k8s-version/serial/Stop 11.15
479 TestStartStop/group/no-preload/serial/DeployApp 9.29
480 TestStartStop/group/embed-certs/serial/DeployApp 8.35
481 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
482 TestStartStop/group/old-k8s-version/serial/SecondStart 53.49
483 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
484 TestStartStop/group/no-preload/serial/Stop 11.19
485 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
486 TestStartStop/group/embed-certs/serial/Stop 11.11
487 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
488 TestStartStop/group/no-preload/serial/SecondStart 49.8
489 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
490 TestStartStop/group/embed-certs/serial/SecondStart 49.71
491 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
492 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
493 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.86
494 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.15
495 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
496 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
497 TestStartStop/group/old-k8s-version/serial/Pause 2.74
498 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
499 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
500 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.61
501 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
503 TestStartStop/group/newest-cni/serial/FirstStart 32.14
504 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
505 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
506 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
507 TestStartStop/group/no-preload/serial/Pause 3.92
508 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
509 TestStartStop/group/embed-certs/serial/Pause 4.03
510 TestStartStop/group/newest-cni/serial/DeployApp 0
511 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
512 TestStartStop/group/newest-cni/serial/Stop 11.06
513 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
514 TestStartStop/group/newest-cni/serial/SecondStart 13.25
515 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
516 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
517 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
518 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
519 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
520 TestStartStop/group/newest-cni/serial/Pause 2.77
521 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
522 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.98
x
+
TestDownloadOnly/v1.28.0/json-events (5.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029797 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029797 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.340310226s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.34s)
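Note: this test asserts on the JSON event stream emitted by the start command shown above; minikube prints one JSON object per line, so the stream pipes cleanly into jq. The profile name below is hypothetical and the command is only an illustrative re-run of the same invocation:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --driver=docker --container-runtime=docker | jq -r '.type'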

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 15:10:04.614894  567092 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1202 15:10:04.615021  567092 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-563346/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
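Note: this check only verifies that the preload tarball logged above is already on disk; an equivalent manual check (illustrative only, using the cache path from the log) is:

	ls -lh /home/jenkins/minikube-integration/22021-563346/.minikube/cache/preloaded-tarball/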

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029797
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029797: exit status 85 (76.387963ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-029797 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-029797 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:09:59
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:09:59.331282  567104 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:09:59.331381  567104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:59.331385  567104 out.go:374] Setting ErrFile to fd 2...
	I1202 15:09:59.331389  567104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:09:59.331611  567104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	W1202 15:09:59.331734  567104 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22021-563346/.minikube/config/config.json: open /home/jenkins/minikube-integration/22021-563346/.minikube/config/config.json: no such file or directory
	I1202 15:09:59.332428  567104 out.go:368] Setting JSON to true
	I1202 15:09:59.333402  567104 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6748,"bootTime":1764681451,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:09:59.333472  567104 start.go:143] virtualization: kvm guest
	I1202 15:09:59.337008  567104 out.go:99] [download-only-029797] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1202 15:09:59.337256  567104 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22021-563346/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 15:09:59.337275  567104 notify.go:221] Checking for updates...
	I1202 15:09:59.338432  567104 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:09:59.339815  567104 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:09:59.341251  567104 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:09:59.342514  567104 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:09:59.343691  567104 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:09:59.345931  567104 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:09:59.346268  567104 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:09:59.371226  567104 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:09:59.371407  567104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:59.428861  567104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 15:09:59.418825767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:59.428973  567104 docker.go:319] overlay module found
	I1202 15:09:59.430614  567104 out.go:99] Using the docker driver based on user configuration
	I1202 15:09:59.430672  567104 start.go:309] selected driver: docker
	I1202 15:09:59.430683  567104 start.go:927] validating driver "docker" against <nil>
	I1202 15:09:59.430821  567104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:09:59.487071  567104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-02 15:09:59.477735683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:09:59.487262  567104 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:09:59.487790  567104 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:09:59.487935  567104 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:09:59.489454  567104 out.go:171] Using Docker driver with root privileges
	I1202 15:09:59.490579  567104 cni.go:84] Creating CNI manager for ""
	I1202 15:09:59.490657  567104 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1202 15:09:59.490673  567104 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 15:09:59.490751  567104 start.go:353] cluster config:
	{Name:download-only-029797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-029797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:09:59.492071  567104 out.go:99] Starting "download-only-029797" primary control-plane node in "download-only-029797" cluster
	I1202 15:09:59.492109  567104 cache.go:134] Beginning downloading kic base image for docker with docker
	I1202 15:09:59.493292  567104 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 15:09:59.493336  567104 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1202 15:09:59.493463  567104 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 15:09:59.511320  567104 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:09:59.511565  567104 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 15:09:59.511666  567104 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1202 15:09:59.511693  567104 cache.go:65] Caching tarball of preloaded images
	I1202 15:09:59.511670  567104 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 15:09:59.511846  567104 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1202 15:09:59.513364  567104 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 15:09:59.513391  567104 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1202 15:09:59.534830  567104 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1202 15:09:59.534988  567104 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/22021-563346/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1202 15:10:02.267224  567104 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1202 15:10:02.267761  567104 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/download-only-029797/config.json ...
	I1202 15:10:02.267807  567104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/download-only-029797/config.json: {Name:mkbe699ef4f05319fb2178747a8d370c8346cf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 15:10:02.268003  567104 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1202 15:10:02.268234  567104 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22021-563346/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-029797 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029797"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
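
The preload step recorded above downloads the tarball from GCS with a ?checksum=md5:&lt;hex&gt; query parameter (the digest itself is fetched from the GCS API) and verifies it before caching. The following Go sketch is illustrative only and is not part of the test harness; it shows the same idea of streaming a download through an MD5 hash, reusing the URL and digest that appear in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing it, then compares the
// MD5 digest with wantHex, mirroring the checksum=md5:<hex> download above.
func downloadWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	sum := md5.New()
	// Write to the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, sum), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(sum.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest copied from the preload download recorded above; the
	// destination path is a placeholder.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"8a955be835827bc584bcce0658a7fcc9",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}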

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029797
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (3.2s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-028278 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-028278 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.197794361s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.20s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 15:10:08.271298  567092 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1202 15:10:08.271350  567092 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22021-563346/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-028278
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-028278: exit status 85 (79.092447ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-029797 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-029797 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ delete  │ -p download-only-029797                                                                                                                                                       │ download-only-029797 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ start   │ -o=json --download-only -p download-only-028278 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-028278 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:10:05
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:10:05.128640  567461 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:10:05.128751  567461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:10:05.128755  567461 out.go:374] Setting ErrFile to fd 2...
	I1202 15:10:05.128759  567461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:10:05.128930  567461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:10:05.129424  567461 out.go:368] Setting JSON to true
	I1202 15:10:05.130304  567461 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6754,"bootTime":1764681451,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:10:05.130365  567461 start.go:143] virtualization: kvm guest
	I1202 15:10:05.132120  567461 out.go:99] [download-only-028278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:10:05.132354  567461 notify.go:221] Checking for updates...
	I1202 15:10:05.133685  567461 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:10:05.135445  567461 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:10:05.136781  567461 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:10:05.140417  567461 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:10:05.141467  567461 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:10:05.143442  567461 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:10:05.143743  567461 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:10:05.169011  567461 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:10:05.169091  567461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:10:05.231085  567461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-02 15:10:05.220354519 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:10:05.231217  567461 docker.go:319] overlay module found
	I1202 15:10:05.232700  567461 out.go:99] Using the docker driver based on user configuration
	I1202 15:10:05.232739  567461 start.go:309] selected driver: docker
	I1202 15:10:05.232751  567461 start.go:927] validating driver "docker" against <nil>
	I1202 15:10:05.232842  567461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:10:05.290877  567461 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-02 15:10:05.281078459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:10:05.291043  567461 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:10:05.291564  567461 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:10:05.291713  567461 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:10:05.293399  567461 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-028278 host does not exist
	  To start a cluster, run: "minikube start -p download-only-028278"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-028278
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-052609 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-052609 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (2.154103072s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.15s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-052609
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-052609: exit status 85 (79.663831ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-029797 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-029797 │ jenkins │ v1.37.0 │ 02 Dec 25 15:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ delete  │ -p download-only-029797                                                                                                                                                              │ download-only-029797 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ start   │ -o=json --download-only -p download-only-028278 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-028278 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ delete  │ -p download-only-028278                                                                                                                                                              │ download-only-028278 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │ 02 Dec 25 15:10 UTC │
	│ start   │ -o=json --download-only -p download-only-052609 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-052609 │ jenkins │ v1.37.0 │ 02 Dec 25 15:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 15:10:08
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 15:10:08.791498  567798 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:10:08.791770  567798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:10:08.791778  567798 out.go:374] Setting ErrFile to fd 2...
	I1202 15:10:08.791783  567798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:10:08.791983  567798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:10:08.792497  567798 out.go:368] Setting JSON to true
	I1202 15:10:08.793374  567798 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6758,"bootTime":1764681451,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:10:08.793435  567798 start.go:143] virtualization: kvm guest
	I1202 15:10:08.795531  567798 out.go:99] [download-only-052609] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:10:08.795717  567798 notify.go:221] Checking for updates...
	I1202 15:10:08.796913  567798 out.go:171] MINIKUBE_LOCATION=22021
	I1202 15:10:08.798280  567798 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:10:08.799567  567798 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:10:08.800642  567798 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:10:08.801788  567798 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 15:10:08.803956  567798 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 15:10:08.804217  567798 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:10:08.827506  567798 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:10:08.827616  567798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:10:08.881582  567798 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:10:08.871957643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:10:08.881707  567798 docker.go:319] overlay module found
	I1202 15:10:08.883531  567798 out.go:99] Using the docker driver based on user configuration
	I1202 15:10:08.883573  567798 start.go:309] selected driver: docker
	I1202 15:10:08.883581  567798 start.go:927] validating driver "docker" against <nil>
	I1202 15:10:08.883693  567798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:10:08.943099  567798 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-02 15:10:08.933343503 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:10:08.943353  567798 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 15:10:08.944065  567798 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1202 15:10:08.944294  567798 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 15:10:08.945932  567798 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-052609 host does not exist
	  To start a cluster, run: "minikube start -p download-only-052609"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.24s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-052609
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-624737 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-624737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-624737
--- PASS: TestDownloadOnlyKic (0.44s)

TestBinaryMirror (0.86s)

=== RUN   TestBinaryMirror
I1202 15:10:12.342263  567092 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-715361 --alsologtostderr --binary-mirror http://127.0.0.1:40193 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-715361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-715361
--- PASS: TestBinaryMirror (0.86s)
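
TestBinaryMirror serves cached binaries through a local HTTP mirror, and the log above shows kubectl being fetched with checksum=file:.../kubectl.sha256, i.e. verified against the digest file published next to the binary. A minimal Go sketch of that verification step, illustrative only and not part of the harness (the release URL is the one from the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base) // the binary itself
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256") // published hex digest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.TrimSpace(string(sumFile))
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "sha256 mismatch for kubectl")
		os.Exit(1)
	}
	fmt.Println("kubectl digest verified:", want)
}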

                                                
                                    
TestOffline (94.06s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-621963 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-621963 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m31.731984922s)
helpers_test.go:175: Cleaning up "offline-docker-621963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-621963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-621963: (2.32617681s)
--- PASS: TestOffline (94.06s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-029941
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-029941: exit status 85 (68.084655ms)

-- stdout --
	* Profile "addons-029941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029941"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-029941
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-029941: exit status 85 (72.155091ms)

-- stdout --
	* Profile "addons-029941" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029941"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (95.53s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-029941 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-029941 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m35.530850775s)
--- PASS: TestAddons/Setup (95.53s)

TestAddons/serial/Volcano (40.19s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 17.111177ms
addons_test.go:868: volcano-scheduler stabilized in 17.374787ms
addons_test.go:884: volcano-controller stabilized in 17.417874ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-rrbkd" [5e788257-3d12-4033-9630-f459dcc71f37] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003567965s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-tkprw" [11bc34e4-d0ec-40ee-b19b-6e6ae0dfc9b2] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004584781s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-dv9p6" [44c5cd5e-2dbb-4d26-bb6b-1dc13262513d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003737487s
addons_test.go:903: (dbg) Run:  kubectl --context addons-029941 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-029941 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-029941 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [0a1d5f67-b814-4800-b3dc-18c7cade8daa] Pending
helpers_test.go:352: "test-job-nginx-0" [0a1d5f67-b814-4800-b3dc-18c7cade8daa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [0a1d5f67-b814-4800-b3dc-18c7cade8daa] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004137449s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable volcano --alsologtostderr -v=1: (11.830403743s)
--- PASS: TestAddons/serial/Volcano (40.19s)
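
The Volcano checks above all follow the same pattern: wait until every pod matching a label selector is healthy, with a generous timeout. A simplified illustrative Go sketch of that polling loop, shelling out to kubectl; it only checks the pod phase, whereas the harness also checks readiness, and the context, namespace and selector are the ones from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until every pod matching selector in namespace
// reports phase Running, or the deadline passes.
func waitForRunning(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			healthy := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					healthy = false
				}
			}
			if healthy {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in namespace %s", selector, namespace)
}

func main() {
	// Context, namespace and selector as used by the Volcano checks above.
	if err := waitForRunning("addons-029941", "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}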

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-029941 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-029941 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-029941 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-029941 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed266cb1-0441-4bd1-ad84-15c446673b0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed266cb1-0441-4bd1-ad84-15c446673b0c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004036709s
addons_test.go:694: (dbg) Run:  kubectl --context addons-029941 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-029941 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-029941 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.54s)
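
The gcp-auth check above verifies that credentials are injected into workloads by reading environment variables from inside the pod with kubectl exec. The same probe as a small illustrative Go sketch, not part of the harness; context, pod and variable names are the ones recorded in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Read an injected variable from inside the running busybox pod.
	out, err := exec.Command("kubectl", "--context", "addons-029941",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "variable not set in pod:", err)
		os.Exit(1)
	}
	fmt.Println("GOOGLE_APPLICATION_CREDENTIALS =", strings.TrimSpace(string(out)))
}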

                                                
                                    
TestAddons/parallel/Registry (15.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.359565ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-xj8j7" [27d270f3-9e64-44af-a11c-2aca83b822e4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00313078s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7bxcn" [8cb0e847-f086-4d56-8adb-e036473a01e9] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00387308s
addons_test.go:392: (dbg) Run:  kubectl --context addons-029941 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-029941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-029941 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.834312614s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 ip
2025/12/02 15:13:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.68s)
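
The registry check above probes the addon from inside the cluster: a throwaway busybox pod runs wget --spider against the Service DNS name, which only succeeds if cluster DNS and the registry endpoints are working. An illustrative Go sketch of the same probe (the context and image match the log; the pod name registry-probe is arbitrary):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// One-shot probe pod: --rm/--restart=Never make it disposable and
	// --spider makes wget check reachability without downloading anything.
	cmd := exec.Command("kubectl", "--context", "addons-029941",
		"run", "--rm", "registry-probe", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
}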

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.110836ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-029941
addons_test.go:332: (dbg) Run:  kubectl --context addons-029941 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/Ingress (19.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-029941 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-029941 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-029941 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [287871a1-3ccd-4ca8-8008-5796a39f620d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [287871a1-3ccd-4ca8-8008-5796a39f620d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003221386s
I1202 15:13:01.819145  567092 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-029941 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable ingress-dns --alsologtostderr -v=1: (1.75659907s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable ingress --alsologtostderr -v=1: (7.713988779s)
--- PASS: TestAddons/parallel/Ingress (19.85s)
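
The ingress check above exercises two paths: an HTTP request issued inside the node where the Host header selects the nginx.example.com rule, and an ingress-dns lookup resolved directly against the node IP. A compact illustrative Go sketch of both probes via the minikube CLI, not part of the harness; the profile and hostnames are the ones used by the test:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output, exiting on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n%s", name, args, err, out)
		os.Exit(1)
	}
	return string(out)
}

func main() {
	profile := "addons-029941" // profile name from the test above

	// The Host header routes the request to the nginx.example.com ingress rule
	// even though it is sent to 127.0.0.1 inside the node.
	fmt.Print(run("minikube", "-p", profile, "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"))

	// ingress-dns: resolve the test record directly against the node IP.
	ip := strings.TrimSpace(run("minikube", "-p", profile, "ip"))
	fmt.Print(run("nslookup", "hello-john.test", ip))
}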

                                                
                                    
TestAddons/parallel/InspektorGadget (11.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hk2cj" [cf7ae93c-670b-49f7-b7c1-b50b07cbecf2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.04055704s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable inspektor-gadget --alsologtostderr -v=1: (5.681819826s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.943221ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-d5r4v" [8501478f-96bd-4bad-b9e6-b483cd9c08c2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003245194s
addons_test.go:463: (dbg) Run:  kubectl --context addons-029941 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/CSI (56.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1202 15:12:52.154814  567092 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 15:12:52.159153  567092 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 15:12:52.159218  567092 kapi.go:107] duration metric: took 4.440878ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.458335ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-029941 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-029941 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [fd44adfe-3c2f-4b68-9a11-e5650554756a] Pending
helpers_test.go:352: "task-pv-pod" [fd44adfe-3c2f-4b68-9a11-e5650554756a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [fd44adfe-3c2f-4b68-9a11-e5650554756a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003331784s
addons_test.go:572: (dbg) Run:  kubectl --context addons-029941 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-029941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-029941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-029941 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-029941 delete pod task-pv-pod: (1.141449826s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-029941 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-029941 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-029941 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [470c6bfa-18dc-40b7-80d7-8f1afff2724f] Pending
helpers_test.go:352: "task-pv-pod-restore" [470c6bfa-18dc-40b7-80d7-8f1afff2724f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [470c6bfa-18dc-40b7-80d7-8f1afff2724f] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00408876s
addons_test.go:614: (dbg) Run:  kubectl --context addons-029941 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-029941 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-029941 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.559567979s)
--- PASS: TestAddons/parallel/CSI (56.46s)

                                                
                                    
TestAddons/parallel/Headlamp (17.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-029941 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-zszr4" [6cb15d45-a485-486b-b64e-d7e8861449a2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-zszr4" [6cb15d45-a485-486b-b64e-d7e8861449a2] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004158827s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable headlamp --alsologtostderr -v=1: (5.644590386s)
--- PASS: TestAddons/parallel/Headlamp (17.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-tcxgs" [752c76ac-606e-4480-9786-c98ae12cf896] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003499946s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

                                                
                                    
TestAddons/parallel/LocalPath (51.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-029941 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-029941 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [bf02b827-c810-4b14-a7e4-b7c6d35e1fdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [bf02b827-c810-4b14-a7e4-b7c6d35e1fdd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [bf02b827-c810-4b14-a7e4-b7c6d35e1fdd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004098687s
addons_test.go:967: (dbg) Run:  kubectl --context addons-029941 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 ssh "cat /opt/local-path-provisioner/pvc-1a8a8381-1d83-4f56-a446-e58aedf74e8a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-029941 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-029941 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.714455801s)
--- PASS: TestAddons/parallel/LocalPath (51.64s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-lfwjv" [8cb62176-a435-4ca0-817d-cd5bfe460572] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003968302s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

                                                
                                    
TestAddons/parallel/Yakd (10.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xxqrl" [9477c821-7691-4fdc-8534-afd878b7e6df] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003789155s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-029941 addons disable yakd --alsologtostderr -v=1: (5.666636403s)
--- PASS: TestAddons/parallel/Yakd (10.67s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-w9w9q" [8e111299-d271-4de7-a26d-200e76792575] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003225948s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-029941 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-029941
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-029941: (10.949545678s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-029941
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-029941
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-029941
--- PASS: TestAddons/StoppedEnableDisable (11.25s)

                                                
                                    
TestCertOptions (27.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-407491 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-407491 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (23.164023075s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-407491 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-407491 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-407491 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-407491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-407491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-407491: (3.916911631s)
--- PASS: TestCertOptions (27.84s)

                                                
                                    
TestCertExpiration (243.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-862350 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-862350 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (24.606949275s)
E1202 16:01:46.434896  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:01:48.809337  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:02:55.507215  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:01.946030  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:01.952478  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:01.964381  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:01.985824  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:02.027120  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:02.108954  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:02.270942  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:02.592214  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:03.234397  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:04.515864  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:07.077405  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:12.199501  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:22.440902  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:03:42.923129  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-862350 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E1202 16:04:23.884458  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-862350 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (36.046313927s)
helpers_test.go:175: Cleaning up "cert-expiration-862350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-862350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-862350: (2.404911088s)
--- PASS: TestCertExpiration (243.06s)

                                                
                                    
TestDockerFlags (27.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-063226 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-063226 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.332441402s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-063226 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-063226 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-063226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-063226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-063226: (2.181583523s)
--- PASS: TestDockerFlags (27.14s)

                                                
                                    
TestForceSystemdFlag (24.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-469680 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-469680 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.518240133s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-469680 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-469680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-469680
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-469680: (2.55713011s)
--- PASS: TestForceSystemdFlag (24.55s)

                                                
                                    
TestForceSystemdEnv (31.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-862906 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-862906 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.756545005s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-862906 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-862906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-862906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-862906: (2.586864054s)
--- PASS: TestForceSystemdEnv (31.77s)

                                                
                                    
TestErrorSpam/setup (24.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-307841 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-307841 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-307841 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-307841 --driver=docker  --container-runtime=docker: (24.794876769s)
--- PASS: TestErrorSpam/setup (24.80s)

                                                
                                    
TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.97s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
TestErrorSpam/pause (1.29s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 pause
--- PASS: TestErrorSpam/pause (1.29s)

                                                
                                    
TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

                                                
                                    
TestErrorSpam/stop (11.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 stop: (10.891077165s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-307841 --log_dir /tmp/nospam-307841 stop
--- PASS: TestErrorSpam/stop (11.12s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-563346/.minikube/files/etc/test/nested/copy/567092/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.46s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-049660 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m9.461433701s)
--- PASS: TestFunctional/serial/StartWithProxy (69.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (50.68s)

=== RUN   TestFunctional/serial/SoftStart
I1202 15:16:00.986490  567092 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --alsologtostderr -v=8
E1202 15:16:48.809610  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.816118  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.827570  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.849037  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.890528  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:48.972014  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:49.133366  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:49.455068  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:50.096397  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:16:51.378440  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-049660 --alsologtostderr -v=8: (50.683256488s)
functional_test.go:678: soft start took 50.684042301s for "functional-049660" cluster.
I1202 15:16:51.670270  567092 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (50.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-049660 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache add registry.k8s.io/pause:latest
E1202 15:16:53.940034  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-049660 /tmp/TestFunctionalserialCacheCmdcacheadd_local2006918611/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache add minikube-local-cache-test:functional-049660
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache delete minikube-local-cache-test:functional-049660
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-049660
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (315.204429ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 kubectl -- --context functional-049660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-049660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (51.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 15:16:59.062152  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:17:09.303845  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:17:29.785290  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-049660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.554476534s)
functional_test.go:776: restart took 51.55465047s for "functional-049660" cluster.
I1202 15:17:48.642058  567092 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (51.55s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-049660 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-049660 logs: (1.144438239s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 logs --file /tmp/TestFunctionalserialLogsFileCmd1382769690/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-049660 logs --file /tmp/TestFunctionalserialLogsFileCmd1382769690/001/logs.txt: (1.160453123s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

                                                
                                    
TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-049660 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-049660
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-049660: exit status 115 (378.707648ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31449 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-049660 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 config get cpus: exit status 14 (90.964197ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 config get cpus: exit status 14 (87.7056ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-049660 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-049660 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 617360: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.86s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-049660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.283383ms)

                                                
                                                
-- stdout --
	* [functional-049660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:18:20.048318  616849 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:18:20.048591  616849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:20.048599  616849 out.go:374] Setting ErrFile to fd 2...
	I1202 15:18:20.048604  616849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:20.048867  616849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:18:20.049412  616849 out.go:368] Setting JSON to false
	I1202 15:18:20.050627  616849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7249,"bootTime":1764681451,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:18:20.050725  616849 start.go:143] virtualization: kvm guest
	I1202 15:18:20.053573  616849 out.go:179] * [functional-049660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:18:20.055149  616849 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:18:20.055155  616849 notify.go:221] Checking for updates...
	I1202 15:18:20.057562  616849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:18:20.058757  616849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:18:20.060022  616849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:18:20.061120  616849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:18:20.062149  616849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:18:20.063736  616849 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:18:20.064327  616849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:18:20.089814  616849 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:18:20.089966  616849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:18:20.147627  616849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:18:20.137870417 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:18:20.147747  616849 docker.go:319] overlay module found
	I1202 15:18:20.150352  616849 out.go:179] * Using the docker driver based on existing profile
	I1202 15:18:20.151570  616849 start.go:309] selected driver: docker
	I1202 15:18:20.151585  616849 start.go:927] validating driver "docker" against &{Name:functional-049660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-049660 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:18:20.151662  616849 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:18:20.153555  616849 out.go:203] 
	W1202 15:18:20.154794  616849 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:18:20.156232  616849 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.43s)
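Note: both dry runs above can be reproduced directly. The undersized request exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run validates the existing profile without a memory override and exits 0:

    out/minikube-linux-amd64 start -p functional-049660 --dry-run --memory 250MB --driver=docker --container-runtime=docker
    echo $?   # 23: 250MiB is below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-049660 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker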

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-049660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-049660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (210.164589ms)

                                                
                                                
-- stdout --
	* [functional-049660] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:18:19.841314  616769 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:18:19.841435  616769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.841447  616769 out.go:374] Setting ErrFile to fd 2...
	I1202 15:18:19.841453  616769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:18:19.841837  616769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:18:19.842362  616769 out.go:368] Setting JSON to false
	I1202 15:18:19.843519  616769 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7249,"bootTime":1764681451,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:18:19.843587  616769 start.go:143] virtualization: kvm guest
	I1202 15:18:19.845774  616769 out.go:179] * [functional-049660] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:18:19.847367  616769 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:18:19.847396  616769 notify.go:221] Checking for updates...
	I1202 15:18:19.849859  616769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:18:19.851260  616769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:18:19.852963  616769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:18:19.854253  616769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:18:19.855637  616769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:18:19.857312  616769 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:18:19.858238  616769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:18:19.905303  616769 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:18:19.905484  616769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:18:19.966423  616769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:18:19.956532295 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:18:19.966537  616769 docker.go:319] overlay module found
	I1202 15:18:19.968569  616769 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:18:19.969967  616769 start.go:309] selected driver: docker
	I1202 15:18:19.969983  616769 start.go:927] validating driver "docker" against &{Name:functional-049660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-049660 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:18:19.970104  616769 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:18:19.972247  616769 out.go:203] 
	W1202 15:18:19.973630  616769 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:18:19.974965  616769 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 status
I1202 15:18:18.702592  567092 detect.go:223] nested VM detected
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
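Note: the three status variants used here, copied from this run; the Go-template labels in the -f string are arbitrary strings chosen by the test (including its "kublet" spelling):

    out/minikube-linux-amd64 -p functional-049660 status
    out/minikube-linux-amd64 -p functional-049660 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-linux-amd64 -p functional-049660 status -o json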

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-049660 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-049660 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-j4vzd" [c51524f0-e129-4e42-8ac5-d3563d3a9681] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-j4vzd" [c51524f0-e129-4e42-8ac5-d3563d3a9681] Running
E1202 15:18:10.746942  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004138388s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31244
functional_test.go:1680: http://192.168.49.2:31244: success! body:
Request served by hello-node-connect-7d85dfc575-j4vzd

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31244
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.76s)
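Note: the same NodePort round trip can be driven by hand. The deployment and service names below come from this run; curl is shown only as a stand-in for the test's internal HTTP check, and the URL will differ per run:

    kubectl --context functional-049660 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-049660 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-049660 service hello-node-connect --url   # e.g. http://192.168.49.2:31244
    curl http://192.168.49.2:31244/   # echo-server replies with the request details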

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8a1489ed-8eaf-4503-90d1-99b1455c331c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004820622s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-049660 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-049660 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-049660 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-049660 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:18:03.916543  567092 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2f2206d9-24c6-4176-9d4b-297e9088ce0b] Pending
helpers_test.go:352: "sp-pod" [2f2206d9-24c6-4176-9d4b-297e9088ce0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2f2206d9-24c6-4176-9d4b-297e9088ce0b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003664889s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-049660 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-049660 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-049660 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8d05b809-c56f-4017-ab4e-5dff09f2c257] Pending
helpers_test.go:352: "sp-pod" [8d05b809-c56f-4017-ab4e-5dff09f2c257] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004913055s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-049660 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.41s)
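Note: the persistence check above boils down to the sequence below (the manifests are the testdata files from the minikube source tree used by this suite); the file written in the first pod must still be visible after the pod is deleted and recreated on the same claim:

    kubectl --context functional-049660 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-049660 get pvc myclaim -o=json        # wait for the claim to bind
    kubectl --context functional-049660 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-049660 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-049660 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-049660 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-049660 exec sp-pod -- ls /tmp/mount   # foo survives the recreate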

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh -n functional-049660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cp functional-049660:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd834397463/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh -n functional-049660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh -n functional-049660 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)
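Note: the copy checks are a there-and-back with minikube cp plus a cat over ssh; the local destination below is a generic /tmp path standing in for the per-test temp directory used above:

    out/minikube-linux-amd64 -p functional-049660 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-049660 ssh -n functional-049660 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-049660 cp functional-049660:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-049660 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt   # missing parent dirs are created on the node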

                                                
                                    
TestFunctional/parallel/MySQL (23.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-049660 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-ccvbc" [f2a6b2ea-18ed-4a76-9a1a-4c2f17134895] Pending
helpers_test.go:352: "mysql-5bb876957f-ccvbc" [f2a6b2ea-18ed-4a76-9a1a-4c2f17134895] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-ccvbc" [f2a6b2ea-18ed-4a76-9a1a-4c2f17134895] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00363464s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;": exit status 1 (136.59946ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:18:13.646957  567092 retry.go:31] will retry after 501.425259ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;": exit status 1 (143.235345ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:18:14.292743  567092 retry.go:31] will retry after 1.801460923s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;": exit status 1 (135.384655ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 15:18:16.230795  567092 retry.go:31] will retry after 2.307500668s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.40s)
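Note: the retries above are expected. Right after the pod reports Running, mysqld is still initializing, so the first exec attempts fail with ERROR 1045 or ERROR 2002 before the query succeeds; a manual check needs the same patience (the pod name is specific to this run):

    kubectl --context functional-049660 replace --force -f testdata/mysql.yaml
    kubectl --context functional-049660 exec mysql-5bb876957f-ccvbc -- mysql -ppassword -e "show databases;"
    # repeat the exec with a short backoff until it returns the database list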

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/567092/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /etc/test/nested/copy/567092/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/567092.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /etc/ssl/certs/567092.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/567092.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /usr/share/ca-certificates/567092.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5670922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /etc/ssl/certs/5670922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5670922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /usr/share/ca-certificates/5670922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-049660 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh "sudo systemctl is-active crio": exit status 1 (377.846552ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
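Note: the check is a single probe over ssh: with the docker runtime in use, cri-o must report inactive, which systemctl signals with a non-zero exit status (hence the expected exit status above):

    out/minikube-linux-amd64 -p functional-049660 ssh "sudo systemctl is-active crio"   # prints "inactive", exits non-zero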

                                                
                                    
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 613056: os: process already finished
helpers_test.go:525: unable to kill pid 612697: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-049660 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [ccbaf364-dc76-4249-9492-96e50028cb97] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [ccbaf364-dc76-4249-9492-96e50028cb97] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003815428s
I1202 15:18:14.365003  567092 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-049660 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.84.223 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
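Note: pulled together, the serial tunnel steps above amount to the following; curl is a stand-in for the test's direct HTTP access, the background/kill handling is illustrative, and the ingress IP (10.98.84.223 in this run) is whatever address the tunnel assigns to the LoadBalancer service:

    out/minikube-linux-amd64 -p functional-049660 tunnel --alsologtostderr &
    kubectl --context functional-049660 apply -f testdata/testsvc.yaml
    kubectl --context functional-049660 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    curl http://10.98.84.223/    # nginx answers once the tunnel routes the service IP
    kill %1                      # stopping the tunnel removes the route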

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-049660 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-049660 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-lntg9" [e9a070d9-3d77-4be6-aac0-fc7c17fd26e2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-lntg9" [e9a070d9-3d77-4be6-aac0-fc7c17fd26e2] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004189866s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "358.53829ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.248278ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "368.828021ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.958157ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
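Note: the timing spread above reflects the two listing modes: the plain listings probe each cluster, while -l / --light (they appear to be the same light mode) skip the status checks and only read the profile files, which is why they return in tens of milliseconds:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light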

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdany-port717123902/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764688698092698144" to /tmp/TestFunctionalparallelMountCmdany-port717123902/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764688698092698144" to /tmp/TestFunctionalparallelMountCmdany-port717123902/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764688698092698144" to /tmp/TestFunctionalparallelMountCmdany-port717123902/001/test-1764688698092698144
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.853899ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:18:18.418864  567092 retry.go:31] will retry after 366.508183ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:18 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:18 test-1764688698092698144
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh cat /mount-9p/test-1764688698092698144
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-049660 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [70c61a0c-d68c-4c30-9f7f-176b9d123fa5] Pending
helpers_test.go:352: "busybox-mount" [70c61a0c-d68c-4c30-9f7f-176b9d123fa5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [70c61a0c-d68c-4c30-9f7f-176b9d123fa5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [70c61a0c-d68c-4c30-9f7f-176b9d123fa5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00425405s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-049660 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdany-port717123902/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.01s)
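Note: the 9p mount workflow can be reproduced with any host directory (/tmp/mount-src below is a placeholder for the directory to share); the initial findmnt may fail for a moment until the mount server is up, as it did above before the retry:

    out/minikube-linux-amd64 mount -p functional-049660 /tmp/mount-src:/mount-9p &
    out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-049660 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-049660 ssh "sudo umount -f /mount-9p"
    kill %1    # stop the background mount process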

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-049660 service list: (1.750624199s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-049660 service list -o json: (1.754094687s)
functional_test.go:1504: Took "1.754205692s" to run "out/minikube-linux-amd64 -p functional-049660 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32290
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-049660 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-049660
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-049660
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-049660 image ls --format short --alsologtostderr:
I1202 15:18:31.553949  621793 out.go:360] Setting OutFile to fd 1 ...
I1202 15:18:31.554293  621793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.554305  621793 out.go:374] Setting ErrFile to fd 2...
I1202 15:18:31.554314  621793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.554634  621793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:18:31.555433  621793 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.555569  621793 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.556173  621793 cli_runner.go:164] Run: docker container inspect functional-049660 --format={{.State.Status}}
I1202 15:18:31.580392  621793 ssh_runner.go:195] Run: systemctl --version
I1202 15:18:31.580450  621793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049660
I1202 15:18:31.606359  621793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-049660/id_rsa Username:docker}
I1202 15:18:31.713313  621793 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
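Note: both listing formats come from the same image ls subcommand; short prints one reference per line (as above), and table renders the boxed layout shown in the next test:

    out/minikube-linux-amd64 -p functional-049660 image ls --format short
    out/minikube-linux-amd64 -p functional-049660 image ls --format table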

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-049660 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ docker.io/library/minikube-local-cache-test │ functional-049660 │ 1d9108901c355 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ docker.io/kicbase/echo-server               │ functional-049660 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/nginx                     │ latest            │ 60adc2e137e75 │ 152MB  │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kubernetesui/dashboard            │ <none>            │ 07655ddf2eebe │ 246MB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-049660 image ls --format table --alsologtostderr:
I1202 15:18:31.811864  622041 out.go:360] Setting OutFile to fd 1 ...
I1202 15:18:31.812158  622041 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.812169  622041 out.go:374] Setting ErrFile to fd 2...
I1202 15:18:31.812173  622041 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.812394  622041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:18:31.812960  622041 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.813061  622041 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.813558  622041 cli_runner.go:164] Run: docker container inspect functional-049660 --format={{.State.Status}}
I1202 15:18:31.834907  622041 ssh_runner.go:195] Run: systemctl --version
I1202 15:18:31.834955  622041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049660
I1202 15:18:31.855952  622041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-049660/id_rsa Username:docker}
I1202 15:18:31.956970  622041 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-049660 image ls --format json --alsologtostderr:
[{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1d9108901c355c835984b233df74e509c186e623cbd6e599b2f05ce80cef078d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-049660"],"size":"30"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"5107333e08a87b836d4
8ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-049660","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d
6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["doc
ker.io/library/nginx:alpine"],"size":"52800000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-049660 image ls --format json --alsologtostderr:
I1202 15:18:31.561007  621796 out.go:360] Setting OutFile to fd 1 ...
I1202 15:18:31.561148  621796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.561161  621796 out.go:374] Setting ErrFile to fd 2...
I1202 15:18:31.561168  621796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.561480  621796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:18:31.562276  621796 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.562412  621796 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.563048  621796 cli_runner.go:164] Run: docker container inspect functional-049660 --format={{.State.Status}}
I1202 15:18:31.586449  621796 ssh_runner.go:195] Run: systemctl --version
I1202 15:18:31.586513  621796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049660
I1202 15:18:31.611123  621796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-049660/id_rsa Username:docker}
I1202 15:18:31.714795  621796 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
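
Note (illustrative, not part of the test suite): the stdout above is a flat JSON array of image records. A minimal Go sketch for decoding it, with field names taken from the output above and the type name chosen here for illustration:

// Decode the `image ls --format json` output shown above from stdin.
// Note that "size" is emitted as a string (e.g. "71900000"), not a number.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []imageRecord
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}

Usage would be along the lines of piping `out/minikube-linux-amd64 -p functional-049660 image ls --format json` into the program.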

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-049660 image ls --format yaml --alsologtostderr:
- id: 1d9108901c355c835984b233df74e509c186e623cbd6e599b2f05ce80cef078d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-049660
size: "30"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-049660
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-049660 image ls --format yaml --alsologtostderr:
I1202 15:18:31.557722  621795 out.go:360] Setting OutFile to fd 1 ...
I1202 15:18:31.557824  621795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.557836  621795 out.go:374] Setting ErrFile to fd 2...
I1202 15:18:31.557842  621795 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.558154  621795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:18:31.558840  621795 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.558962  621795 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.559464  621795 cli_runner.go:164] Run: docker container inspect functional-049660 --format={{.State.Status}}
I1202 15:18:31.581490  621795 ssh_runner.go:195] Run: systemctl --version
I1202 15:18:31.581542  621795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049660
I1202 15:18:31.604953  621795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-049660/id_rsa Username:docker}
I1202 15:18:31.714346  621795 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh pgrep buildkitd: exit status 1 (314.322923ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image build -t localhost/my-image:functional-049660 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-049660 image build -t localhost/my-image:functional-049660 testdata/build --alsologtostderr: (1.977197684s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-049660 image build -t localhost/my-image:functional-049660 testdata/build --alsologtostderr:
I1202 15:18:31.859246  622052 out.go:360] Setting OutFile to fd 1 ...
I1202 15:18:31.859345  622052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.859353  622052 out.go:374] Setting ErrFile to fd 2...
I1202 15:18:31.859356  622052 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:18:31.859538  622052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:18:31.860095  622052 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.860830  622052 config.go:182] Loaded profile config "functional-049660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1202 15:18:31.861382  622052 cli_runner.go:164] Run: docker container inspect functional-049660 --format={{.State.Status}}
I1202 15:18:31.879976  622052 ssh_runner.go:195] Run: systemctl --version
I1202 15:18:31.880028  622052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049660
I1202 15:18:31.899284  622052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-049660/id_rsa Username:docker}
I1202 15:18:31.998851  622052 build_images.go:162] Building image from path: /tmp/build.2149455473.tar
I1202 15:18:31.998948  622052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:18:32.007507  622052 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2149455473.tar
I1202 15:18:32.011425  622052 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2149455473.tar: stat -c "%s %y" /var/lib/minikube/build/build.2149455473.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2149455473.tar': No such file or directory
I1202 15:18:32.011458  622052 ssh_runner.go:362] scp /tmp/build.2149455473.tar --> /var/lib/minikube/build/build.2149455473.tar (3072 bytes)
I1202 15:18:32.030629  622052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2149455473
I1202 15:18:32.038809  622052 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2149455473 -xf /var/lib/minikube/build/build.2149455473.tar
I1202 15:18:32.047966  622052 docker.go:361] Building image: /var/lib/minikube/build/build.2149455473
I1202 15:18:32.048053  622052 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-049660 /var/lib/minikube/build/build.2149455473
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b6395123bb42285ac32ddc1daf11a7d6d6213cd0897f00c3267ae8d693c86b0d done
#8 naming to localhost/my-image:functional-049660 done
#8 DONE 0.0s
I1202 15:18:33.742924  622052 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-049660 /var/lib/minikube/build/build.2149455473: (1.694844702s)
I1202 15:18:33.743009  622052 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2149455473
I1202 15:18:33.751946  622052 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2149455473.tar
I1202 15:18:33.760342  622052 build_images.go:218] Built localhost/my-image:functional-049660 from /tmp/build.2149455473.tar
I1202 15:18:33.760384  622052 build_images.go:134] succeeded building to: functional-049660
I1202 15:18:33.760390  622052 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.53s)
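
Note (illustrative): the build log above shows a three-step build (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) whose context is shipped to the node as a tarball (/tmp/build.2149455473.tar) before `docker build` runs against it. The sketch below only illustrates packing a context directory into such a tarball; it is not minikube's build_images.go, and the paths are placeholders.

// Pack a local docker build context directory into a tar archive,
// the same shape of artifact the log above copies to the node.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

func tarContext(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarContext("testdata/build", "/tmp/build-context.tar"); err != nil {
		os.Exit(1)
	}
}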

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-049660
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdspecific-port2672235013/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.794386ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:18:26.465287  567092 retry.go:31] will retry after 602.259821ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdspecific-port2672235013/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh "sudo umount -f /mount-9p": exit status 1 (325.481026ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-049660 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdspecific-port2672235013/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)
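
Note (illustrative): the "will retry after 602.259821ms" line above is the harness re-probing the guest until the 9p mount becomes visible. A minimal sketch of that retry-until-success pattern; the probe command, attempt count and backoff values are illustrative, and this is not minikube's retry.go:

// Re-run a probe command with exponential backoff until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		// Probe: is the 9p mount visible inside the guest yet?
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-049660",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is visible")
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mount never appeared")
}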

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image load --daemon kicbase/echo-server:functional-049660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32290
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.66s)
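
Note (illustrative): the URL printed above is the NodePort endpoint that `minikube service hello-node --url` resolved for the hello-node service. A minimal way to probe it, purely for illustration and not part of the test:

// Probe the NodePort URL reported above and print the HTTP status.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:32290")
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded with", resp.Status)
}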

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 version --short
2025/12/02 15:18:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image load --daemon kicbase/echo-server:functional-049660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-049660 docker-env) && out/minikube-linux-amd64 status -p functional-049660"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-049660 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.19s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T" /mount1: exit status 1 (402.349973ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:18:28.728029  567092 retry.go:31] will retry after 506.94405ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-049660 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-049660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup379276031/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-049660
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image load --daemon kicbase/echo-server:functional-049660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image save kicbase/echo-server:functional-049660 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image rm kicbase/echo-server:functional-049660 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-049660
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-049660 image save --daemon kicbase/echo-server:functional-049660 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-049660
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-049660
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-049660
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-049660
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22021-563346/.minikube/files/etc/test/nested/copy/567092/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (69.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
E1202 15:19:32.670709  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-169724 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (1m9.030596791s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (69.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (51.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 15:19:46.375793  567092 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-169724 --alsologtostderr -v=8: (51.40060766s)
functional_test.go:678: soft start took 51.402899607s for "functional-169724" cluster.
I1202 15:20:37.779594  567092 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (51.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-169724 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3888825820/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache add minikube-local-cache-test:functional-169724
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache delete minikube-local-cache-test:functional-169724
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.224181ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 kubectl -- --context functional-169724 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-169724 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (54.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-169724 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.612767705s)
functional_test.go:776: restart took 54.612886235s for "functional-169724" cluster.
I1202 15:21:37.701174  567092 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (54.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-169724 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
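
Note (illustrative): the phase/status lines above are derived from the pod list that `kubectl --context functional-169724 get po -l tier=control-plane -n kube-system -o=json` returns. A minimal sketch of that check; the struct covers only the fields used here, and the program reads the JSON from stdin rather than invoking kubectl itself:

// Report each control-plane pod's phase and whether its Ready condition is True.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	var pods podList
	if err := json.NewDecoder(os.Stdin).Decode(&pods); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}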

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 logs: (1.057643571s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1031610612/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1031610612/001/logs.txt: (1.044151924s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-169724 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-169724
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-169724: exit status 115 (363.064672ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30464 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-169724 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-169724 delete -f testdata/invalidsvc.yaml: (1.166596641s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.70s)
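Note: InvalidService exercises a Service whose selector matches no running pod, so "minikube service" can print the NodePort URL but exits 115 (SVC_UNREACHABLE). The same flow by hand, reusing the repo's manifest referenced in the log:
    # invalid-svc (testdata/invalidsvc.yaml) is a NodePort Service with no backing pod
    kubectl --context functional-169724 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-169724      # expected: exit status 115 (SVC_UNREACHABLE)
    kubectl --context functional-169724 delete -f testdata/invalidsvc.yaml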

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 config get cpus: exit status 14 (84.743869ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 config get cpus: exit status 14 (99.924049ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.57s)
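Note: the ConfigCmd sequence relies on "config get" exiting 14 when the key is unset; the same sequence by hand:
    out/minikube-linux-amd64 -p functional-169724 config unset cpus
    out/minikube-linux-amd64 -p functional-169724 config get cpus     # exit status 14: key not in config
    out/minikube-linux-amd64 -p functional-169724 config set cpus 2
    out/minikube-linux-amd64 -p functional-169724 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-169724 config unset cpus
    out/minikube-linux-amd64 -p functional-169724 config get cpus     # exit status 14 again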

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (187.765371ms)

                                                
                                                
-- stdout --
	* [functional-169724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:21:47.123745  640922 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:21:47.124011  640922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.124021  640922 out.go:374] Setting ErrFile to fd 2...
	I1202 15:21:47.124026  640922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:47.124361  640922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:21:47.124882  640922 out.go:368] Setting JSON to false
	I1202 15:21:47.126022  640922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7456,"bootTime":1764681451,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:21:47.126118  640922 start.go:143] virtualization: kvm guest
	I1202 15:21:47.127574  640922 out.go:179] * [functional-169724] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 15:21:47.128943  640922 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:21:47.128995  640922 notify.go:221] Checking for updates...
	I1202 15:21:47.131135  640922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:21:47.132409  640922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:21:47.133651  640922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:21:47.137645  640922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:21:47.139121  640922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:21:47.140827  640922 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1202 15:21:47.141404  640922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:21:47.166139  640922 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:21:47.166274  640922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:47.232252  640922 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:47.220508442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:47.232400  640922 docker.go:319] overlay module found
	I1202 15:21:47.235281  640922 out.go:179] * Using the docker driver based on existing profile
	I1202 15:21:47.236498  640922 start.go:309] selected driver: docker
	I1202 15:21:47.236512  640922 start.go:927] validating driver "docker" against &{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:47.236603  640922 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:21:47.239327  640922 out.go:203] 
	W1202 15:21:47.240791  640922 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 15:21:47.243019  640922 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.43s)
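Note: --dry-run only validates the requested configuration against the existing profile; the 250MB request fails the minimum-memory check with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second run with the profile's own settings exits 0. By hand:
    out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23
    out/minikube-linux-amd64 start -p functional-169724 --dry-run --driver=docker --container-runtime=docker                  # exit 0, nothing is started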

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (223.157672ms)

                                                
                                                
-- stdout --
	* [functional-169724] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:21:46.397546  640412 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:21:46.397699  640412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:46.397711  640412 out.go:374] Setting ErrFile to fd 2...
	I1202 15:21:46.397717  640412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:21:46.398171  640412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:21:46.398790  640412 out.go:368] Setting JSON to false
	I1202 15:21:46.400215  640412 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7455,"bootTime":1764681451,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 15:21:46.400301  640412 start.go:143] virtualization: kvm guest
	I1202 15:21:46.402165  640412 out.go:179] * [functional-169724] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 15:21:46.403696  640412 out.go:179]   - MINIKUBE_LOCATION=22021
	I1202 15:21:46.403692  640412 notify.go:221] Checking for updates...
	I1202 15:21:46.406115  640412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 15:21:46.407409  640412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	I1202 15:21:46.408502  640412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	I1202 15:21:46.409743  640412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 15:21:46.410883  640412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 15:21:46.412635  640412 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1202 15:21:46.413476  640412 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 15:21:46.443683  640412 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1202 15:21:46.443854  640412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:21:46.528256  640412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-02 15:21:46.515101503 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:21:46.528402  640412 docker.go:319] overlay module found
	I1202 15:21:46.531933  640412 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 15:21:46.533129  640412 start.go:309] selected driver: docker
	I1202 15:21:46.533156  640412 start.go:927] validating driver "docker" against &{Name:functional-169724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-169724 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 15:21:46.533312  640412 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 15:21:46.536409  640412 out.go:203] 
	W1202 15:21:46.537746  640412 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 15:21:46.542281  640412 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.22s)
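Note: the French output above comes from the locale of the minikube process, not from a dedicated flag; a sketch assuming the locale is picked up from LC_ALL (the test's exact mechanism may differ):
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-169724 --dry-run --memory 250MB --driver=docker   # same exit 23, messages localized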

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.18s)
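Note: the three status invocations above, written out with shell quoting (the -f template fields are the status fields shown in the captured command line):
    out/minikube-linux-amd64 -p functional-169724 status
    out/minikube-linux-amd64 -p functional-169724 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-169724 status -o json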

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-169724 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-169724 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-j55l8" [2ff55480-ea00-41bf-a25a-66d73af5f8ba] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-j55l8" [2ff55480-ea00-41bf-a25a-66d73af5f8ba] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004321198s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30895
functional_test.go:1680: http://192.168.49.2:30895: success! body:
Request served by hello-node-connect-9f67c86d4-j55l8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30895
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.60s)
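Note: ServiceCmdConnect is the usual "deploy, expose, resolve URL, request" loop; a hedged sketch (the NodePort, 30895 in this run, is assigned by the cluster):
    kubectl --context functional-169724 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-169724 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-169724 service hello-node-connect --url    # e.g. http://192.168.49.2:30895
    curl "$(out/minikube-linux-amd64 -p functional-169724 service hello-node-connect --url)"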

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fb71a451-8291-4c4f-b613-07e4ad4ba389] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003444869s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-169724 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-169724 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-169724 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-169724 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1ce3aec8-f9ec-4111-bd99-3898247dffa7] Pending
helpers_test.go:352: "sp-pod" [1ce3aec8-f9ec-4111-bd99-3898247dffa7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1ce3aec8-f9ec-4111-bd99-3898247dffa7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004319286s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-169724 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-169724 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-169724 apply -f testdata/storage-provisioner/pod.yaml
I1202 15:22:05.059973  567092 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d8340d35-b3d6-4326-b084-210d089189c8] Pending
helpers_test.go:352: "sp-pod" [d8340d35-b3d6-4326-b084-210d089189c8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003663441s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-169724 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (24.64s)
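Note: the PersistentVolumeClaim steps are the persistence check in miniature: the claim outlives the pod, so a file written before the pod is deleted must still be visible after the pod is recreated. A sketch of the same flow using the repo's testdata manifests referenced in the log:
    kubectl --context functional-169724 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-169724 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-169724 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-169724 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-169724 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-169724 exec sp-pod -- ls /tmp/mount    # foo must still be listed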

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh -n functional-169724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cp functional-169724:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3581448417/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh -n functional-169724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh -n functional-169724 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.87s)
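Note: "minikube cp" copies host-to-node and node-to-host, and creates missing target directories on the node; the three cases exercised above, plus the verification read:
    out/minikube-linux-amd64 -p functional-169724 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-169724 cp functional-169724:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-169724 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /home/docker/cp-test.txt"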

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/567092/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/test/nested/copy/567092/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)
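Note: FileSync checks that files staged under the minikube home directory's files/ tree on the host appear at the corresponding absolute path inside the node; here the test had staged /etc/test/nested/copy/567092/hosts, and the check is a plain ssh+cat:
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/test/nested/copy/567092/hosts"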

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/567092.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/567092.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/567092.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /usr/share/ca-certificates/567092.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5670922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/5670922.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5670922.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /usr/share/ca-certificates/5670922.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.83s)
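Note: CertSync verifies that host-provided certificates (named after the test process id, 567092.pem and 5670922.pem in this run) are synced into the node's CA locations, including the hash-named copies; each check is a plain ssh+cat, for example:
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/567092.pem"
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /usr/share/ca-certificates/567092.pem"
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy of the same cert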

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-169724 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)
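Note: the label check reads the first node's labels through a go-template; the same query by hand, with the quoting spelled out:
    kubectl --context functional-169724 get nodes -o go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'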

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh "sudo systemctl is-active crio": exit status 1 (283.061458ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.28s)
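Note: with docker as the selected container runtime, the other runtimes must be stopped inside the node; "systemctl is-active" prints "inactive" and exits non-zero, which is exactly what the test accepts:
    out/minikube-linux-amd64 -p functional-169724 ssh "sudo systemctl is-active crio"    # expect "inactive" and a non-zero exit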

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-169724 docker-env) && out/minikube-linux-amd64 status -p functional-169724"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-169724 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (1.25s)
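Note: docker-env exports DOCKER_HOST and related variables so the host docker client talks to the daemon inside the node; the bash variant checked above:
    eval "$(out/minikube-linux-amd64 -p functional-169724 docker-env)"
    out/minikube-linux-amd64 status -p functional-169724    # profile still reports healthy with the env exported
    docker images                                           # now lists the images inside the minikube node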

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1370597718/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764688904882666883" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1370597718/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764688904882666883" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1370597718/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764688904882666883" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1370597718/001/test-1764688904882666883
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.452618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:21:45.260486  567092 retry.go:31] will retry after 412.39246ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 15:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 15:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 15:21 test-1764688904882666883
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh cat /mount-9p/test-1764688904882666883
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-169724 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4e56a838-32e9-49bc-a324-b7c7090b669c] Pending
helpers_test.go:352: "busybox-mount" [4e56a838-32e9-49bc-a324-b7c7090b669c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1202 15:21:48.809791  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [4e56a838-32e9-49bc-a324-b7c7090b669c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4e56a838-32e9-49bc-a324-b7c7090b669c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003726674s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-169724 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1370597718/001:/mount-9p --alsologtostderr -v=1] ...
I1202 15:21:52.963159  567092 detect.go:223] nested VM detected
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.12s)
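Note: "minikube mount" runs a 9p server and keeps the mount alive only while the process runs; a hedged sketch of the same check (/tmp/hostdir is an arbitrary host directory, not the test's temp dir):
    out/minikube-linux-amd64 mount -p functional-169724 /tmp/hostdir:/mount-9p &    # or keep it in a second terminal
    out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-169724 ssh "ls -la /mount-9p"
    kill %1    # stopping the mount process removes /mount-9p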

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "472.693532ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "70.896949ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 640269: os: process already finished
helpers_test.go:525: unable to kill pid 639829: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "439.253341ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.185739ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.51s)
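Note: the timing gap between the two listings above is expected; --light skips probing each cluster's status, so it returns much faster:
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light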

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-169724 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [62f56390-864f-4fa9-8613-ec806d8f8d84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [62f56390-864f-4fa9-8613-ec806d8f8d84] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004134838s
I1202 15:21:54.697569  567092 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (8.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1280567478/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (302.905512ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:21:53.303126  567092 retry.go:31] will retry after 481.524291ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1280567478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh "sudo umount -f /mount-9p": exit status 1 (347.76924ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-169724 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1280567478/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-169724 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.106.161 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
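Note: the tunnel serial steps add up to the usual LoadBalancer flow: with "minikube tunnel" running, the nginx-svc Service gets an ingress IP (10.102.106.161 in this run) that is reachable directly from the host; a sketch:
    out/minikube-linux-amd64 -p functional-169724 tunnel &    # or keep it in a second terminal
    kubectl --context functional-169724 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl "http://$(kubectl --context functional-169724 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    kill %1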

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-169724 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-169724 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-169724 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-l9dzq" [5a2c5e59-3f17-4414-b8e6-0b845e77e5e2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-l9dzq" [5a2c5e59-3f17-4414-b8e6-0b845e77e5e2] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003918322s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.84s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T" /mount1: exit status 1 (441.13126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 15:21:55.435868  567092 retry.go:31] will retry after 281.228781ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-169724 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-169724 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3017012784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 service list: (1.741894601s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 service list -o json: (1.71879016s)
functional_test.go:1504: Took "1.718929319s" to run "out/minikube-linux-amd64 -p functional-169724 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31132
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31132
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)
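As a quick sanity check, the NodePort URL the test discovered can be probed directly. A minimal sketch, assuming curl is available on the host and the hello-node service still exists; the URL is re-resolved rather than hard-coding the one from this run:
  URL=$(out/minikube-linux-amd64 -p functional-169724 service hello-node --url)
  curl -s "$URL"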

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169724 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-169724
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-169724
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169724 image ls --format short --alsologtostderr:
I1202 15:22:16.358456  648275 out.go:360] Setting OutFile to fd 1 ...
I1202 15:22:16.358534  648275 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.358541  648275 out.go:374] Setting ErrFile to fd 2...
I1202 15:22:16.358545  648275 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.358746  648275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:22:16.359331  648275 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.359447  648275 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.359884  648275 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:22:16.379334  648275 ssh_runner.go:195] Run: systemctl --version
I1202 15:22:16.379401  648275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:22:16.398823  648275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:22:16.499971  648275 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169724 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ docker.io/library/nginx                     │ latest            │ 60adc2e137e75 │ 152MB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ docker.io/kicbase/echo-server               │ functional-169724 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ docker.io/library/minikube-local-cache-test │ functional-169724 │ 1d9108901c355 │ 30B    │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169724 image ls --format table --alsologtostderr:
I1202 15:22:16.865620  648532 out.go:360] Setting OutFile to fd 1 ...
I1202 15:22:16.865932  648532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.865943  648532 out.go:374] Setting ErrFile to fd 2...
I1202 15:22:16.865947  648532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.866147  648532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:22:16.866894  648532 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.867033  648532 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.867781  648532 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:22:16.889527  648532 ssh_runner.go:195] Run: systemctl --version
I1202 15:22:16.889592  648532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:22:16.910926  648532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:22:17.012941  648532 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169724 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6
e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"152000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDi
gests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-169724","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1d9108901c355c835984b233df74e509c186e623cbd6e599b2f05ce80cef078d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-169724"],"size":"30"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169724 image ls --format json --alsologtostderr:
I1202 15:22:16.609028  648372 out.go:360] Setting OutFile to fd 1 ...
I1202 15:22:16.609334  648372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.609345  648372 out.go:374] Setting ErrFile to fd 2...
I1202 15:22:16.609352  648372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.609567  648372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:22:16.610119  648372 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.610267  648372 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.610748  648372 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:22:16.629794  648372 ssh_runner.go:195] Run: systemctl --version
I1202 15:22:16.629865  648372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:22:16.651958  648372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:22:16.759559  648372 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.26s)
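The stdout above is a single JSON array of image objects (id, repoDigests, repoTags, size), so it is easy to post-process. A minimal sketch, assuming jq is installed on the host:
  # list just the repo tags known to the cluster's container runtime
  out/minikube-linux-amd64 -p functional-169724 image ls --format json | jq -r '.[].repoTags[]'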

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls --format yaml --alsologtostderr
E1202 15:22:16.512370  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-169724 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 1d9108901c355c835984b233df74e509c186e623cbd6e599b2f05ce80cef078d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-169724
size: "30"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-169724
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "152000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169724 image ls --format yaml --alsologtostderr:
I1202 15:22:16.358030  648274 out.go:360] Setting OutFile to fd 1 ...
I1202 15:22:16.358327  648274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.358338  648274 out.go:374] Setting ErrFile to fd 2...
I1202 15:22:16.358347  648274 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.358563  648274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:22:16.359219  648274 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.359345  648274 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.359829  648274 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:22:16.379010  648274 ssh_runner.go:195] Run: systemctl --version
I1202 15:22:16.379075  648274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:22:16.399140  648274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:22:16.499970  648274 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-169724 ssh pgrep buildkitd: exit status 1 (288.852992ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr: (2.16597382s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-169724 image build -t localhost/my-image:functional-169724 testdata/build --alsologtostderr:
I1202 15:22:16.905916  648543 out.go:360] Setting OutFile to fd 1 ...
I1202 15:22:16.906263  648543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.906278  648543 out.go:374] Setting ErrFile to fd 2...
I1202 15:22:16.906285  648543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 15:22:16.906603  648543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
I1202 15:22:16.907470  648543 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.908276  648543 config.go:182] Loaded profile config "functional-169724": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1202 15:22:16.908800  648543 cli_runner.go:164] Run: docker container inspect functional-169724 --format={{.State.Status}}
I1202 15:22:16.930848  648543 ssh_runner.go:195] Run: systemctl --version
I1202 15:22:16.930914  648543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-169724
I1202 15:22:16.950270  648543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/functional-169724/id_rsa Username:docker}
I1202 15:22:17.053614  648543 build_images.go:162] Building image from path: /tmp/build.4075357124.tar
I1202 15:22:17.053732  648543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 15:22:17.063829  648543 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4075357124.tar
I1202 15:22:17.069283  648543 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4075357124.tar: stat -c "%s %y" /var/lib/minikube/build/build.4075357124.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4075357124.tar': No such file or directory
I1202 15:22:17.069327  648543 ssh_runner.go:362] scp /tmp/build.4075357124.tar --> /var/lib/minikube/build/build.4075357124.tar (3072 bytes)
I1202 15:22:17.089957  648543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4075357124
I1202 15:22:17.098698  648543 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4075357124 -xf /var/lib/minikube/build/build.4075357124.tar
I1202 15:22:17.107943  648543 docker.go:361] Building image: /var/lib/minikube/build/build.4075357124
I1202 15:22:17.108014  648543 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-169724 /var/lib/minikube/build/build.4075357124
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3931985e4c78888d66da560555c75b58bc493cdd8d395a2c564c8b9f5b6b8381 done
#8 naming to localhost/my-image:functional-169724 done
#8 DONE 0.0s
I1202 15:22:18.964726  648543 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-169724 /var/lib/minikube/build/build.4075357124: (1.856668945s)
I1202 15:22:18.964823  648543 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4075357124
I1202 15:22:18.973370  648543 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4075357124.tar
I1202 15:22:18.981079  648543 build_images.go:218] Built localhost/my-image:functional-169724 from /tmp/build.4075357124.tar
I1202 15:22:18.981109  648543 build_images.go:134] succeeded building to: functional-169724
I1202 15:22:18.981114  648543 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
E1202 15:22:55.506850  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.513327  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.524844  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.546332  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.588142  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.669610  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:55.831263  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:56.152991  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:56.795319  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:22:58.077000  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:23:00.638689  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:23:05.760602  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:23:16.002788  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:23:36.484755  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:24:17.446502  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:25:39.367958  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:26:48.809123  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.70s)
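The build output above shows the three steps the test image is built from ([1/3] FROM gcr.io/k8s-minikube/busybox:latest, [2/3] RUN true, [3/3] ADD content.txt /). A minimal sketch of reproducing the build by hand; the directory name and the contents of content.txt are assumptions, everything else follows the logged steps:
  mkdir -p /tmp/imagebuild-demo && cd /tmp/imagebuild-demo
  echo "demo payload" > content.txt   # placeholder file contents
  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  out/minikube-linux-amd64 -p functional-169724 image build -t localhost/my-image:functional-169724 .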

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.91s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-169724
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image load --daemon kicbase/echo-server:functional-169724 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image save kicbase/echo-server:functional-169724 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image rm kicbase/echo-server:functional-169724 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-169724
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-169724 image save --daemon kicbase/echo-server:functional-169724 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.38s)
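Taken together, the save/load subtests above amount to the following round trip. A sketch reusing the tar path from this run (any writable host path would do); it is a summary of the logged commands, not the exact test sequence:
  out/minikube-linux-amd64 -p functional-169724 image save kicbase/echo-server:functional-169724 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-169724 image rm kicbase/echo-server:functional-169724
  out/minikube-linux-amd64 -p functional-169724 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar
  # push the image from the cluster back into the host docker daemon and verify it arrived
  out/minikube-linux-amd64 -p functional-169724 image save --daemon kicbase/echo-server:functional-169724
  docker image inspect kicbase/echo-server:functional-169724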

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-169724
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (137.49s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1202 15:32:55.510915  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:33:11.874025  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m16.734735135s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (137.49s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.16s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 kubectl -- rollout status deployment/busybox: (2.801814588s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-g54x6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-jk45w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-ns7cn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-g54x6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-jk45w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-ns7cn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-g54x6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-jk45w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-ns7cn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.16s)
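The nslookup calls above verify in-cluster DNS from each busybox replica. A minimal sketch of the same check; pod names are generated per run, so one is looked up instead of reusing the names from this log:
  POD=$(out/minikube-linux-amd64 -p ha-825681 kubectl -- get pods -o 'jsonpath={.items[0].metadata.name}')
  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local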

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-g54x6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-g54x6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-jk45w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-jk45w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-ns7cn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 kubectl -- exec busybox-7b57f96db7-ns7cn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (33.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 node add --alsologtostderr -v 5: (32.993717893s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.91s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-825681 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.67s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp testdata/cp-test.txt ha-825681:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2570814740/001/cp-test_ha-825681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681:/home/docker/cp-test.txt ha-825681-m02:/home/docker/cp-test_ha-825681_ha-825681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test_ha-825681_ha-825681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681:/home/docker/cp-test.txt ha-825681-m03:/home/docker/cp-test_ha-825681_ha-825681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test_ha-825681_ha-825681-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681:/home/docker/cp-test.txt ha-825681-m04:/home/docker/cp-test_ha-825681_ha-825681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test_ha-825681_ha-825681-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp testdata/cp-test.txt ha-825681-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2570814740/001/cp-test_ha-825681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m02:/home/docker/cp-test.txt ha-825681:/home/docker/cp-test_ha-825681-m02_ha-825681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test_ha-825681-m02_ha-825681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m02:/home/docker/cp-test.txt ha-825681-m03:/home/docker/cp-test_ha-825681-m02_ha-825681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test_ha-825681-m02_ha-825681-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m02:/home/docker/cp-test.txt ha-825681-m04:/home/docker/cp-test_ha-825681-m02_ha-825681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test_ha-825681-m02_ha-825681-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp testdata/cp-test.txt ha-825681-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2570814740/001/cp-test_ha-825681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m03:/home/docker/cp-test.txt ha-825681:/home/docker/cp-test_ha-825681-m03_ha-825681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test_ha-825681-m03_ha-825681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m03:/home/docker/cp-test.txt ha-825681-m02:/home/docker/cp-test_ha-825681-m03_ha-825681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test_ha-825681-m03_ha-825681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m03:/home/docker/cp-test.txt ha-825681-m04:/home/docker/cp-test_ha-825681-m03_ha-825681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test_ha-825681-m03_ha-825681-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp testdata/cp-test.txt ha-825681-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2570814740/001/cp-test_ha-825681-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m04:/home/docker/cp-test.txt ha-825681:/home/docker/cp-test_ha-825681-m04_ha-825681.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681 "sudo cat /home/docker/cp-test_ha-825681-m04_ha-825681.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m04:/home/docker/cp-test.txt ha-825681-m02:/home/docker/cp-test_ha-825681-m04_ha-825681-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m02 "sudo cat /home/docker/cp-test_ha-825681-m04_ha-825681-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 cp ha-825681-m04:/home/docker/cp-test.txt ha-825681-m03:/home/docker/cp-test_ha-825681-m04_ha-825681-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 ssh -n ha-825681-m03 "sudo cat /home/docker/cp-test_ha-825681-m04_ha-825681-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.67s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 node stop m02 --alsologtostderr -v 5: (10.950754365s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5: exit status 7 (751.037885ms)

                                                
                                                
-- stdout --
	ha-825681
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825681-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825681-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-825681-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:35:32.138232  680497 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:35:32.138507  680497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:32.138517  680497 out.go:374] Setting ErrFile to fd 2...
	I1202 15:35:32.138522  680497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:35:32.138804  680497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:35:32.139066  680497 out.go:368] Setting JSON to false
	I1202 15:35:32.139096  680497 mustload.go:66] Loading cluster: ha-825681
	I1202 15:35:32.139197  680497 notify.go:221] Checking for updates...
	I1202 15:35:32.139586  680497 config.go:182] Loaded profile config "ha-825681": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:35:32.139603  680497 status.go:174] checking status of ha-825681 ...
	I1202 15:35:32.140156  680497 cli_runner.go:164] Run: docker container inspect ha-825681 --format={{.State.Status}}
	I1202 15:35:32.160331  680497 status.go:371] ha-825681 host status = "Running" (err=<nil>)
	I1202 15:35:32.160354  680497 host.go:66] Checking if "ha-825681" exists ...
	I1202 15:35:32.160593  680497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-825681
	I1202 15:35:32.180415  680497 host.go:66] Checking if "ha-825681" exists ...
	I1202 15:35:32.180721  680497 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:35:32.180762  680497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-825681
	I1202 15:35:32.199111  680497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/ha-825681/id_rsa Username:docker}
	I1202 15:35:32.297960  680497 ssh_runner.go:195] Run: systemctl --version
	I1202 15:35:32.304446  680497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:35:32.317778  680497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:35:32.381982  680497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:6 ContainersRunning:3 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-02 15:35:32.371478543 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:35:32.382781  680497 kubeconfig.go:125] found "ha-825681" server: "https://192.168.49.254:8443"
	I1202 15:35:32.382823  680497 api_server.go:166] Checking apiserver status ...
	I1202 15:35:32.382869  680497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:35:32.396217  680497 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2159/cgroup
	W1202 15:35:32.404752  680497 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:35:32.404830  680497 ssh_runner.go:195] Run: ls
	I1202 15:35:32.408713  680497 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:35:32.413090  680497 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:35:32.413113  680497 status.go:463] ha-825681 apiserver status = Running (err=<nil>)
	I1202 15:35:32.413122  680497 status.go:176] ha-825681 status: &{Name:ha-825681 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:35:32.413139  680497 status.go:174] checking status of ha-825681-m02 ...
	I1202 15:35:32.413435  680497 cli_runner.go:164] Run: docker container inspect ha-825681-m02 --format={{.State.Status}}
	I1202 15:35:32.432520  680497 status.go:371] ha-825681-m02 host status = "Stopped" (err=<nil>)
	I1202 15:35:32.432543  680497 status.go:384] host is not running, skipping remaining checks
	I1202 15:35:32.432549  680497 status.go:176] ha-825681-m02 status: &{Name:ha-825681-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:35:32.432573  680497 status.go:174] checking status of ha-825681-m03 ...
	I1202 15:35:32.432838  680497 cli_runner.go:164] Run: docker container inspect ha-825681-m03 --format={{.State.Status}}
	I1202 15:35:32.452628  680497 status.go:371] ha-825681-m03 host status = "Running" (err=<nil>)
	I1202 15:35:32.452654  680497 host.go:66] Checking if "ha-825681-m03" exists ...
	I1202 15:35:32.452931  680497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-825681-m03
	I1202 15:35:32.471225  680497 host.go:66] Checking if "ha-825681-m03" exists ...
	I1202 15:35:32.471488  680497 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:35:32.471525  680497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-825681-m03
	I1202 15:35:32.490122  680497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/ha-825681-m03/id_rsa Username:docker}
	I1202 15:35:32.591214  680497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:35:32.605994  680497 kubeconfig.go:125] found "ha-825681" server: "https://192.168.49.254:8443"
	I1202 15:35:32.606022  680497 api_server.go:166] Checking apiserver status ...
	I1202 15:35:32.606058  680497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:35:32.618272  680497 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2125/cgroup
	W1202 15:35:32.626910  680497 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:35:32.626984  680497 ssh_runner.go:195] Run: ls
	I1202 15:35:32.631332  680497 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 15:35:32.636160  680497 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 15:35:32.636191  680497 status.go:463] ha-825681-m03 apiserver status = Running (err=<nil>)
	I1202 15:35:32.636210  680497 status.go:176] ha-825681-m03 status: &{Name:ha-825681-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:35:32.636232  680497 status.go:174] checking status of ha-825681-m04 ...
	I1202 15:35:32.636518  680497 cli_runner.go:164] Run: docker container inspect ha-825681-m04 --format={{.State.Status}}
	I1202 15:35:32.654802  680497 status.go:371] ha-825681-m04 host status = "Running" (err=<nil>)
	I1202 15:35:32.654826  680497 host.go:66] Checking if "ha-825681-m04" exists ...
	I1202 15:35:32.655098  680497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-825681-m04
	I1202 15:35:32.674626  680497 host.go:66] Checking if "ha-825681-m04" exists ...
	I1202 15:35:32.674916  680497 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:35:32.674974  680497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-825681-m04
	I1202 15:35:32.694615  680497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/ha-825681-m04/id_rsa Username:docker}
	I1202 15:35:32.793076  680497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:35:32.818871  680497 status.go:176] ha-825681-m04 status: &{Name:ha-825681-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (37.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 node start m02 --alsologtostderr -v 5: (36.184739284s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5: (1.018361535s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.28s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.06s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 stop --alsologtostderr -v 5: (33.972398319s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 start --wait true --alsologtostderr -v 5
E1202 15:36:46.435063  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.442136  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.453595  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.475054  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.518174  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.599798  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:46.761133  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:47.083030  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:47.725282  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:48.809093  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:49.006789  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:51.568436  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:36:56.690788  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:37:06.932707  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:37:27.414346  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:37:55.512374  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:38:08.376322  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 start --wait true --alsologtostderr -v 5: (2m18.938130743s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.06s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.75s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 node delete m03 --alsologtostderr -v 5: (8.891676788s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.67s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 stop --alsologtostderr -v 5
E1202 15:39:18.573105  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:39:30.298406  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 stop --alsologtostderr -v 5: (32.543195225s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5: exit status 7 (127.020104ms)

                                                
                                                
-- stdout --
	ha-825681
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825681-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-825681-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:39:47.985806  710560 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:39:47.986076  710560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:39:47.986084  710560 out.go:374] Setting ErrFile to fd 2...
	I1202 15:39:47.986088  710560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:39:47.986311  710560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:39:47.986481  710560 out.go:368] Setting JSON to false
	I1202 15:39:47.986521  710560 mustload.go:66] Loading cluster: ha-825681
	I1202 15:39:47.986676  710560 notify.go:221] Checking for updates...
	I1202 15:39:47.986878  710560 config.go:182] Loaded profile config "ha-825681": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:39:47.986894  710560 status.go:174] checking status of ha-825681 ...
	I1202 15:39:47.987399  710560 cli_runner.go:164] Run: docker container inspect ha-825681 --format={{.State.Status}}
	I1202 15:39:48.007074  710560 status.go:371] ha-825681 host status = "Stopped" (err=<nil>)
	I1202 15:39:48.007121  710560 status.go:384] host is not running, skipping remaining checks
	I1202 15:39:48.007140  710560 status.go:176] ha-825681 status: &{Name:ha-825681 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:39:48.007206  710560 status.go:174] checking status of ha-825681-m02 ...
	I1202 15:39:48.007515  710560 cli_runner.go:164] Run: docker container inspect ha-825681-m02 --format={{.State.Status}}
	I1202 15:39:48.026504  710560 status.go:371] ha-825681-m02 host status = "Stopped" (err=<nil>)
	I1202 15:39:48.026527  710560 status.go:384] host is not running, skipping remaining checks
	I1202 15:39:48.026533  710560 status.go:176] ha-825681-m02 status: &{Name:ha-825681-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:39:48.026568  710560 status.go:174] checking status of ha-825681-m04 ...
	I1202 15:39:48.026833  710560 cli_runner.go:164] Run: docker container inspect ha-825681-m04 --format={{.State.Status}}
	I1202 15:39:48.045809  710560 status.go:371] ha-825681-m04 host status = "Stopped" (err=<nil>)
	I1202 15:39:48.045836  710560 status.go:384] host is not running, skipping remaining checks
	I1202 15:39:48.045843  710560 status.go:176] ha-825681-m04 status: &{Name:ha-825681-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (97.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m36.81779229s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (97.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (48.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 node add --control-plane --alsologtostderr -v 5
E1202 15:41:46.435810  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:41:48.809610  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:42:14.140331  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-825681 node add --control-plane --alsologtostderr -v 5: (47.993737147s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-825681 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
TestImageBuild/serial/Setup (25.48s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-149757 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-149757 --driver=docker  --container-runtime=docker: (25.478277673s)
--- PASS: TestImageBuild/serial/Setup (25.48s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.01s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-149757
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-149757: (1.007216899s)
--- PASS: TestImageBuild/serial/NormalBuild (1.01s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.68s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-149757
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.68s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-149757
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.49s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-149757
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.52s)

                                                
                                    
TestJSONOutput/start/Command (70s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-381600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1202 15:42:55.507033  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-381600 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m10.002628029s)
--- PASS: TestJSONOutput/start/Command (70.00s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-381600 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-381600 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-381600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-381600 --output=json --user=testUser: (10.932515823s)
--- PASS: TestJSONOutput/stop/Command (10.93s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-543028 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-543028 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.866858ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b53d35de-01b1-49b9-ba2e-c40fd4d5dd8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-543028] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91c4a760-6b79-43ec-9336-a96e3566fd7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"3598d95d-986a-4414-84d8-6c36ba7bc86a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0cdc1d2e-a7c9-40eb-8b40-e01ad4198928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig"}}
	{"specversion":"1.0","id":"ee90f072-16aa-4cad-8929-43cf9aefaca0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube"}}
	{"specversion":"1.0","id":"5619ebde-b7f3-4c75-9e46-a299e26a9f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4f986abc-efb5-47cc-bec8-d76cdb4ed303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6586c2ea-a838-4975-8881-9a80728c68e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-543028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-543028
--- PASS: TestErrorJSONOutput (0.25s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.35s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-470918 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-470918 --network=: (21.161112703s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-470918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-470918
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-470918: (2.171383426s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.35s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.82s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-776490 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-776490 --network=bridge: (20.743005646s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-776490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-776490
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-776490: (2.060708218s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.82s)

                                                
                                    
x
+
TestKicExistingNetwork (25.84s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1202 15:45:05.194563  567092 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 15:45:05.212589  567092 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 15:45:05.212654  567092 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 15:45:05.212680  567092 cli_runner.go:164] Run: docker network inspect existing-network
W1202 15:45:05.232050  567092 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 15:45:05.232086  567092 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1202 15:45:05.232099  567092 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1202 15:45:05.232255  567092 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 15:45:05.250716  567092 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-640fd0a20f67 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:da:65:72:64:1d} reservation:<nil>}
I1202 15:45:05.251143  567092 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b3d340}
I1202 15:45:05.251171  567092 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 15:45:05.251242  567092 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 15:45:05.297446  567092 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-469157 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-469157 --network=existing-network: (23.643702567s)
helpers_test.go:175: Cleaning up "existing-network-469157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-469157
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-469157: (2.058229911s)
I1202 15:45:31.017442  567092 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.84s)
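For reference, a minimal sketch (not the test's own source) of the flow exercised above: pre-create a Docker network, then point `minikube start --network=` at it so the KIC driver adopts it instead of allocating a new one. The profile name, network name, and binary path are placeholders taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, printing any failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n", name, args, err)
	}
	return string(out)
}

func main() {
	// Pre-create the bridge network that minikube should adopt (subnet taken from the log above).
	run("docker", "network", "create", "--driver=bridge", "--subnet=192.168.58.0/24", "existing-network")

	// Start a profile attached to that existing network instead of letting minikube create one.
	run("out/minikube-linux-amd64", "start", "-p", "existing-network-469157", "--network=existing-network")

	// The network should still be listed under its original name.
	fmt.Print(run("docker", "network", "ls", "--format", "{{.Name}}"))

	// Clean up the profile; the pre-created network is left for the caller to remove.
	run("out/minikube-linux-amd64", "delete", "-p", "existing-network-469157")
}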

                                                
                                    
x
+
TestKicCustomSubnet (24.56s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-586309 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-586309 --subnet=192.168.60.0/24: (22.355088898s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-586309 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-586309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-586309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-586309: (2.178320193s)
--- PASS: TestKicCustomSubnet (24.56s)

                                                
                                    
x
+
TestKicStaticIP (27.16s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-665814 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-665814 --static-ip=192.168.200.200: (24.786577383s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-665814 ip
helpers_test.go:175: Cleaning up "static-ip-665814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-665814
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-665814: (2.208881246s)
--- PASS: TestKicStaticIP (27.16s)
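A short sketch, using placeholder names mirroring the log, of how the --static-ip check above can be reproduced: start a profile with a fixed address and compare it against what `minikube ip` reports.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const wantIP = "192.168.200.200"
	profile := "static-ip-665814" // placeholder profile name mirroring the log

	// Start the profile with a fixed address inside a private range.
	if out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--static-ip="+wantIP).CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
		return
	}

	// `minikube ip` prints the node address; it should match the requested one.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ip").CombinedOutput()
	if err != nil {
		fmt.Printf("ip failed: %v\n%s", err, out)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("requested %s, got %s, match=%v\n", wantIP, got, got == wantIP)
}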

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (51.96s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-642670 --driver=docker  --container-runtime=docker
E1202 15:46:46.435646  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-642670 --driver=docker  --container-runtime=docker: (23.892840346s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-645895 --driver=docker  --container-runtime=docker
E1202 15:46:48.808945  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-645895 --driver=docker  --container-runtime=docker: (22.376187353s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-642670
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-645895
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-645895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-645895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-645895: (2.227594442s)
helpers_test.go:175: Cleaning up "first-642670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-642670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-642670: (2.165165157s)
--- PASS: TestMinikubeProfile (51.96s)
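A small illustration (an assumed usage, not the test code) of consuming `profile list -ojson`: rather than relying on the JSON schema, it only checks that both placeholder profile names appear in the output.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder profile names mirroring the log above; both are assumed to have been started already.
	profiles := []string{"first-642670", "second-645895"}

	// `profile list -ojson` emits machine-readable output; here we only look for the names.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").CombinedOutput()
	if err != nil {
		fmt.Printf("profile list failed: %v\n%s", err, out)
		return
	}
	for _, p := range profiles {
		fmt.Printf("%s listed: %v\n", p, strings.Contains(string(out), p))
	}
}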

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-583297 --memory=3072 --mount-string /tmp/TestMountStartserial1153499623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-583297 --memory=3072 --mount-string /tmp/TestMountStartserial1153499623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.546497608s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.55s)
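A minimal sketch of the mount flow above, with a placeholder profile name and host directory: start a no-Kubernetes profile with --mount-string and confirm the directory is visible at /minikube-host over ssh. The mount port value is arbitrary.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "mount-start-demo"      // placeholder profile name
	hostDir := "/tmp/mount-start-demo" // placeholder host directory

	// Make sure the host directory exists before asking minikube to mount it.
	os.MkdirAll(hostDir, 0o755)

	// Start a no-Kubernetes profile with the host directory mounted at /minikube-host,
	// mirroring the flags used by the test above.
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--mount-string", hostDir+":/minikube-host",
		"--mount-port", "46464", "--no-kubernetes",
		"--driver=docker", "--container-runtime=docker")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
		return
	}

	// The mount should be visible from inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	fmt.Printf("ls /minikube-host: %s (err=%v)\n", out, err)
}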

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-583297 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-601426 --memory=3072 --mount-string /tmp/TestMountStartserial1153499623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-601426 --memory=3072 --mount-string /tmp/TestMountStartserial1153499623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.64183274s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601426 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.55s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-583297 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-583297 --alsologtostderr -v=5: (1.54841239s)
--- PASS: TestMountStart/serial/DeleteFirst (1.55s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601426 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-601426
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-601426: (1.259791246s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-601426
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-601426: (7.400807103s)
--- PASS: TestMountStart/serial/RestartStopped (8.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-601426 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (78.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313163 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1202 15:47:55.506600  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313163 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.160091847s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.69s)
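A compact sketch of the two-node bring-up above, under a placeholder profile name: start with --nodes=2 and --wait=true, then print `minikube status`, which reports one block per node.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-demo" // placeholder profile name

	// Bring up a two-node cluster and wait for all components, as the test does.
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--wait=true", "--memory=3072", "--nodes=2",
		"--driver=docker", "--container-runtime=docker")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
		return
	}

	// Status prints a block for the control plane and each worker node.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))
}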

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-313163 -- rollout status deployment/busybox: (2.326035234s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-fj22c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-wdwm6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-fj22c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-wdwm6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-fj22c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-wdwm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.13s)
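A rough sketch of the DNS check above, assuming a placeholder profile and a hypothetical manifest defining a two-replica busybox Deployment: apply it through the bundled kubectl, wait for the rollout, and run nslookup from every pod.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command through the minikube wrapper, as the log above does.
func kubectl(profile string, args ...string) (string, error) {
	full := append([]string{"kubectl", "-p", profile, "--"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "multinode-demo"                      // placeholder profile name
	manifest := "./testdata/busybox-deployment.yaml" // hypothetical two-replica busybox Deployment

	if out, err := kubectl(profile, "apply", "-f", manifest); err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	kubectl(profile, "rollout", "status", "deployment/busybox")

	// Resolve an in-cluster name from every pod to confirm DNS works on both nodes.
	names, _ := kubectl(profile, "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	for _, pod := range strings.Fields(names) {
		out, err := kubectl(profile, "exec", pod, "--", "nslookup", "kubernetes.default")
		fmt.Printf("%s: err=%v\n%s", pod, err, out)
	}
}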

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-fj22c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-fj22c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-wdwm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313163 -- exec busybox-7b57f96db7-wdwm6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (33.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-313163 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-313163 -v=5 --alsologtostderr: (33.240184255s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (33.96s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-313163 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp testdata/cp-test.txt multinode-313163:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2045615320/001/cp-test_multinode-313163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163:/home/docker/cp-test.txt multinode-313163-m02:/home/docker/cp-test_multinode-313163_multinode-313163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test_multinode-313163_multinode-313163-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163:/home/docker/cp-test.txt multinode-313163-m03:/home/docker/cp-test_multinode-313163_multinode-313163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test_multinode-313163_multinode-313163-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp testdata/cp-test.txt multinode-313163-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2045615320/001/cp-test_multinode-313163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m02:/home/docker/cp-test.txt multinode-313163:/home/docker/cp-test_multinode-313163-m02_multinode-313163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test_multinode-313163-m02_multinode-313163.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m02:/home/docker/cp-test.txt multinode-313163-m03:/home/docker/cp-test_multinode-313163-m02_multinode-313163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test_multinode-313163-m02_multinode-313163-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp testdata/cp-test.txt multinode-313163-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2045615320/001/cp-test_multinode-313163-m03.txt
E1202 15:49:51.876418  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m03:/home/docker/cp-test.txt multinode-313163:/home/docker/cp-test_multinode-313163-m03_multinode-313163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163 "sudo cat /home/docker/cp-test_multinode-313163-m03_multinode-313163.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 cp multinode-313163-m03:/home/docker/cp-test.txt multinode-313163-m02:/home/docker/cp-test_multinode-313163-m03_multinode-313163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 ssh -n multinode-313163-m02 "sudo cat /home/docker/cp-test_multinode-313163-m03_multinode-313163-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.78s)
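A small sketch of the `minikube cp` round-trip above, assuming an already-running multi-node profile with placeholder names: copy a local file to a worker node, then read it back over `minikube ssh -n`.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-demo" // placeholder profile; worker names follow the <profile>-m02 pattern
	node := profile + "-m02"

	// A throwaway local file to copy into the worker node (path is arbitrary).
	local := "/tmp/cp-demo.txt"
	os.WriteFile(local, []byte("hello from the host\n"), 0o644)

	// `minikube cp` accepts <node>:<path> targets, as the test exercises for every node pair.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":/home/docker/cp-demo.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}

	// Read it back over ssh to confirm the copy landed on that node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat /home/docker/cp-demo.txt").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}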

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-313163 node stop m03: (1.298526147s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313163 status: exit status 7 (529.189442ms)

                                                
                                                
-- stdout --
	multinode-313163
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313163-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313163-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr: exit status 7 (527.241318ms)

                                                
                                                
-- stdout --
	multinode-313163
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313163-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313163-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:49:56.297377  793682 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:49:56.297658  793682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:49:56.297669  793682 out.go:374] Setting ErrFile to fd 2...
	I1202 15:49:56.297673  793682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:49:56.297888  793682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:49:56.298072  793682 out.go:368] Setting JSON to false
	I1202 15:49:56.298099  793682 mustload.go:66] Loading cluster: multinode-313163
	I1202 15:49:56.298264  793682 notify.go:221] Checking for updates...
	I1202 15:49:56.298455  793682 config.go:182] Loaded profile config "multinode-313163": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:49:56.298471  793682 status.go:174] checking status of multinode-313163 ...
	I1202 15:49:56.298957  793682 cli_runner.go:164] Run: docker container inspect multinode-313163 --format={{.State.Status}}
	I1202 15:49:56.318994  793682 status.go:371] multinode-313163 host status = "Running" (err=<nil>)
	I1202 15:49:56.319027  793682 host.go:66] Checking if "multinode-313163" exists ...
	I1202 15:49:56.319337  793682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-313163
	I1202 15:49:56.338024  793682 host.go:66] Checking if "multinode-313163" exists ...
	I1202 15:49:56.338367  793682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:49:56.338441  793682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-313163
	I1202 15:49:56.358672  793682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33314 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/multinode-313163/id_rsa Username:docker}
	I1202 15:49:56.460600  793682 ssh_runner.go:195] Run: systemctl --version
	I1202 15:49:56.467369  793682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:49:56.480945  793682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 15:49:56.538915  793682 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:5 ContainersRunning:2 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-02 15:49:56.52929083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 15:49:56.539482  793682 kubeconfig.go:125] found "multinode-313163" server: "https://192.168.67.2:8443"
	I1202 15:49:56.539515  793682 api_server.go:166] Checking apiserver status ...
	I1202 15:49:56.539553  793682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 15:49:56.552863  793682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2095/cgroup
	W1202 15:49:56.562972  793682 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2095/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 15:49:56.563045  793682 ssh_runner.go:195] Run: ls
	I1202 15:49:56.567519  793682 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 15:49:56.571956  793682 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 15:49:56.571992  793682 status.go:463] multinode-313163 apiserver status = Running (err=<nil>)
	I1202 15:49:56.572037  793682 status.go:176] multinode-313163 status: &{Name:multinode-313163 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:49:56.572064  793682 status.go:174] checking status of multinode-313163-m02 ...
	I1202 15:49:56.572435  793682 cli_runner.go:164] Run: docker container inspect multinode-313163-m02 --format={{.State.Status}}
	I1202 15:49:56.592214  793682 status.go:371] multinode-313163-m02 host status = "Running" (err=<nil>)
	I1202 15:49:56.592242  793682 host.go:66] Checking if "multinode-313163-m02" exists ...
	I1202 15:49:56.592529  793682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-313163-m02
	I1202 15:49:56.609852  793682 host.go:66] Checking if "multinode-313163-m02" exists ...
	I1202 15:49:56.610151  793682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 15:49:56.610225  793682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-313163-m02
	I1202 15:49:56.628568  793682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33319 SSHKeyPath:/home/jenkins/minikube-integration/22021-563346/.minikube/machines/multinode-313163-m02/id_rsa Username:docker}
	I1202 15:49:56.727684  793682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 15:49:56.740608  793682 status.go:176] multinode-313163-m02 status: &{Name:multinode-313163-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:49:56.740642  793682 status.go:174] checking status of multinode-313163-m03 ...
	I1202 15:49:56.740911  793682 cli_runner.go:164] Run: docker container inspect multinode-313163-m03 --format={{.State.Status}}
	I1202 15:49:56.760298  793682 status.go:371] multinode-313163-m03 host status = "Stopped" (err=<nil>)
	I1202 15:49:56.760324  793682 status.go:384] host is not running, skipping remaining checks
	I1202 15:49:56.760330  793682 status.go:176] multinode-313163-m03 status: &{Name:multinode-313163-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
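A short sketch of handling the non-zero `minikube status` exit seen above (exit status 7 in the log, where one worker host was stopped), using a placeholder profile name: the exit code is recovered from exec.ExitError.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "multinode-demo" // placeholder profile name, assumed to exist

	// `minikube status` exits non-zero when any node is not fully running.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run status: %v\n", err)
	}
}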

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-313163 node start m03 -v=5 --alsologtostderr: (8.055002123s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.80s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (74.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313163
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-313163
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-313163: (23.063966231s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313163 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313163 --wait=true -v=5 --alsologtostderr: (51.44026497s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313163
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-313163 node delete m03: (4.773238839s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.41s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (21.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 stop
E1202 15:51:46.434835  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-313163 stop: (21.741332419s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313163 status: exit status 7 (100.903902ms)

                                                
                                                
-- stdout --
	multinode-313163
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313163-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr: exit status 7 (102.742994ms)

                                                
                                                
-- stdout --
	multinode-313163
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313163-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 15:51:47.512294  808518 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:51:47.512626  808518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:51:47.512633  808518 out.go:374] Setting ErrFile to fd 2...
	I1202 15:51:47.512637  808518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:51:47.512834  808518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:51:47.513001  808518 out.go:368] Setting JSON to false
	I1202 15:51:47.513029  808518 mustload.go:66] Loading cluster: multinode-313163
	I1202 15:51:47.513220  808518 notify.go:221] Checking for updates...
	I1202 15:51:47.513381  808518 config.go:182] Loaded profile config "multinode-313163": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:51:47.513396  808518 status.go:174] checking status of multinode-313163 ...
	I1202 15:51:47.513829  808518 cli_runner.go:164] Run: docker container inspect multinode-313163 --format={{.State.Status}}
	I1202 15:51:47.535883  808518 status.go:371] multinode-313163 host status = "Stopped" (err=<nil>)
	I1202 15:51:47.535907  808518 status.go:384] host is not running, skipping remaining checks
	I1202 15:51:47.535914  808518 status.go:176] multinode-313163 status: &{Name:multinode-313163 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 15:51:47.535949  808518 status.go:174] checking status of multinode-313163-m02 ...
	I1202 15:51:47.536240  808518 cli_runner.go:164] Run: docker container inspect multinode-313163-m02 --format={{.State.Status}}
	I1202 15:51:47.554356  808518 status.go:371] multinode-313163-m02 host status = "Stopped" (err=<nil>)
	I1202 15:51:47.554379  808518 status.go:384] host is not running, skipping remaining checks
	I1202 15:51:47.554386  808518 status.go:176] multinode-313163-m02 status: &{Name:multinode-313163-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.95s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (46.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313163 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1202 15:51:48.809492  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313163 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (45.723690279s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313163 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.34s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (28.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313163
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313163-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-313163-m02 --driver=docker  --container-runtime=docker: exit status 14 (76.125246ms)

                                                
                                                
-- stdout --
	* [multinode-313163-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-313163-m02' is duplicated with machine name 'multinode-313163-m02' in profile 'multinode-313163'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313163-m03 --driver=docker  --container-runtime=docker
E1202 15:52:55.507108  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313163-m03 --driver=docker  --container-runtime=docker: (25.657235973s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-313163
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-313163: exit status 80 (316.664707ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-313163 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-313163-m03 already exists in multinode-313163-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-313163-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-313163-m03: (2.262491373s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.37s)

                                                
                                    
x
+
TestPreload (135.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-509494 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E1202 15:53:09.502266  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-509494 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m10.811797282s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-509494 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-509494 image pull gcr.io/k8s-minikube/busybox: (1.729774379s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-509494
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-509494: (10.921465766s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-509494 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-509494 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.538590578s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-509494 image list
helpers_test.go:175: Cleaning up "test-preload-509494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-509494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-509494: (2.278438544s)
--- PASS: TestPreload (135.53s)
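A minimal sketch of the preload flow above, with a placeholder profile name: start with --preload=false, pull an extra image, stop, restart with --preload=true, and list images to confirm the pulled image survived the restart.

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary from the same out/ layout used in the log above.
func mk(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "preload-demo" // placeholder profile name

	// 1. Start without the preloaded images tarball, so the image cache begins empty.
	mk("start", "-p", profile, "--memory=3072", "--preload=false",
		"--driver=docker", "--container-runtime=docker")

	// 2. Pull an extra image into the node's runtime.
	mk("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")

	// 3. Stop, then restart with preload enabled; the pulled image should survive.
	mk("stop", "-p", profile)
	mk("start", "-p", profile, "--preload=true", "--wait=true",
		"--driver=docker", "--container-runtime=docker")

	// 4. List images and look for the one pulled in step 2.
	out, _ := mk("-p", profile, "image", "list")
	fmt.Print(out)
}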

                                                
                                    
x
+
TestScheduledStopUnix (97.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-538116 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-538116 --memory=3072 --driver=docker  --container-runtime=docker: (24.020602687s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-538116 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:55:45.990467  833419 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:55:45.990798  833419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:45.990809  833419 out.go:374] Setting ErrFile to fd 2...
	I1202 15:55:45.990813  833419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:45.991043  833419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:55:45.991292  833419 out.go:368] Setting JSON to false
	I1202 15:55:45.991384  833419 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:55:45.991681  833419 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:55:45.991771  833419 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/config.json ...
	I1202 15:55:45.991950  833419 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:55:45.992058  833419 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-538116 -n scheduled-stop-538116
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-538116 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:55:46.407540  833566 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:55:46.407696  833566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:46.407707  833566 out.go:374] Setting ErrFile to fd 2...
	I1202 15:55:46.407714  833566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:55:46.407940  833566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:55:46.408220  833566 out.go:368] Setting JSON to false
	I1202 15:55:46.408446  833566 daemonize_unix.go:73] killing process 833453 as it is an old scheduled stop
	I1202 15:55:46.408559  833566 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:55:46.408905  833566 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:55:46.409007  833566 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/config.json ...
	I1202 15:55:46.409255  833566 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:55:46.409396  833566 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 15:55:46.416331  567092 retry.go:31] will retry after 64.412µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.417493  567092 retry.go:31] will retry after 139.467µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.418654  567092 retry.go:31] will retry after 182.094µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.419862  567092 retry.go:31] will retry after 222.276µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.421060  567092 retry.go:31] will retry after 431.053µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.422342  567092 retry.go:31] will retry after 797.482µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.423611  567092 retry.go:31] will retry after 1.611792ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.425935  567092 retry.go:31] will retry after 881.277µs: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.427125  567092 retry.go:31] will retry after 1.307941ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.429417  567092 retry.go:31] will retry after 4.427522ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.434773  567092 retry.go:31] will retry after 7.259261ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.443065  567092 retry.go:31] will retry after 8.989993ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.452740  567092 retry.go:31] will retry after 10.748807ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.464093  567092 retry.go:31] will retry after 15.850329ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.480544  567092 retry.go:31] will retry after 24.906711ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.505858  567092 retry.go:31] will retry after 24.433231ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
I1202 15:55:46.531162  567092 retry.go:31] will retry after 54.902441ms: open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-538116 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1202 15:55:58.576458  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-538116 -n scheduled-stop-538116
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-538116
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-538116 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 15:56:12.418627  834498 out.go:360] Setting OutFile to fd 1 ...
	I1202 15:56:12.418763  834498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:56:12.418773  834498 out.go:374] Setting ErrFile to fd 2...
	I1202 15:56:12.418780  834498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 15:56:12.419003  834498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-563346/.minikube/bin
	I1202 15:56:12.419296  834498 out.go:368] Setting JSON to false
	I1202 15:56:12.419399  834498 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:56:12.419760  834498 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1202 15:56:12.419861  834498 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/scheduled-stop-538116/config.json ...
	I1202 15:56:12.420084  834498 mustload.go:66] Loading cluster: scheduled-stop-538116
	I1202 15:56:12.420221  834498 config.go:182] Loaded profile config "scheduled-stop-538116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1202 15:56:46.435024  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 15:56:48.809803  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-538116
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-538116: exit status 7 (87.899843ms)

                                                
                                                
-- stdout --
	scheduled-stop-538116
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-538116 -n scheduled-stop-538116
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-538116 -n scheduled-stop-538116: exit status 7 (86.351899ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-538116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-538116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-538116: (1.679525273s)
--- PASS: TestScheduledStopUnix (97.37s)

                                                
                                    
TestSkaffold (76.99s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe581447635 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-475198 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-475198 --memory=3072 --driver=docker  --container-runtime=docker: (21.756189138s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe581447635 run --minikube-profile skaffold-475198 --kube-context skaffold-475198 --status-check=true --port-forward=false --interactive=false
E1202 15:57:55.509800  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe581447635 run --minikube-profile skaffold-475198 --kube-context skaffold-475198 --status-check=true --port-forward=false --interactive=false: (40.114575578s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-55d66f4bf8-vjhp5" [9455d8a0-7844-4076-a268-1f5a46235033] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.0034238s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-7cf6c8565-zwfql" [80a6fa8d-3309-4d6c-8faa-c769917f4349] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00369413s
helpers_test.go:175: Cleaning up "skaffold-475198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-475198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-475198: (3.295537633s)
--- PASS: TestSkaffold (76.99s)

                                                
                                    
TestInsufficientStorage (9.53s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-974307 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-974307 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.198076445s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a4e967cd-619c-4271-9e8e-3004180d1a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-974307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bd80ebc-02f8-4b58-95c4-810acf00f456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22021"}}
	{"specversion":"1.0","id":"f17c9c88-484f-4b51-97bc-824e35f01d09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eea1995d-3392-4b5c-b17e-4e952313fd24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig"}}
	{"specversion":"1.0","id":"823141c4-e9dd-4970-9a38-5db8453f0c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube"}}
	{"specversion":"1.0","id":"70287708-024c-4ed5-8004-fa73d6222cc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b9e25ea7-2493-41a7-99de-7e4209e47c5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"122e7212-1c14-495e-8c74-ce35819412bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"91d64540-3a92-4070-b29c-0688888073f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0af1a0b5-a408-45e4-946f-dda269eb7947","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"debfa48d-6e11-4be5-888d-a7b56f05db49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"adf30ae4-60ae-4c15-8857-f746ff0849af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-974307\" primary control-plane node in \"insufficient-storage-974307\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b7dcfd0-f66a-4fd0-9468-a152879a0af4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d043a7e2-1173-433a-b160-890c6923db64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"707183f5-f0f0-4d16-9ae8-65d69a518e43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-974307 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-974307 --output=json --layout=cluster: exit status 7 (294.590211ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-974307","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-974307","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 15:58:23.745104  846487 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-974307" does not appear in /home/jenkins/minikube-integration/22021-563346/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-974307 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-974307 --output=json --layout=cluster: exit status 7 (297.92429ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-974307","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-974307","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 15:58:24.043247  846596 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-974307" does not appear in /home/jenkins/minikube-integration/22021-563346/kubeconfig
	E1202 15:58:24.053637  846596 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/insufficient-storage-974307/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-974307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-974307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-974307: (1.739633309s)
--- PASS: TestInsufficientStorage (9.53s)

                                                
                                    
TestRunningBinaryUpgrade (326.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.244333330 start -p running-upgrade-520609 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.244333330 start -p running-upgrade-520609 --memory=3072 --vm-driver=docker  --container-runtime=docker: (24.38212903s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-520609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-520609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m59.412334103s)
helpers_test.go:175: Cleaning up "running-upgrade-520609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-520609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-520609: (2.321515992s)
--- PASS: TestRunningBinaryUpgrade (326.74s)

                                                
                                    
TestKubernetesUpgrade (348.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.109121958s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-213058
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-213058: (10.927318537s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-213058 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-213058 status --format={{.Host}}: exit status 7 (89.461297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.827434981s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-213058 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (96.049661ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-213058] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-213058
	    minikube start -p kubernetes-upgrade-213058 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2130582 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-213058 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1202 16:05:45.806067  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-213058 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.124875719s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-213058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-213058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-213058: (2.61314341s)
--- PASS: TestKubernetesUpgrade (348.85s)

                                                
                                    
TestMissingContainerUpgrade (99.66s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3466151013 start -p missing-upgrade-720035 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3466151013 start -p missing-upgrade-720035 --memory=3072 --driver=docker  --container-runtime=docker: (45.639875101s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-720035
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-720035: (10.470490942s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-720035
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-720035 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-720035 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.076563465s)
helpers_test.go:175: Cleaning up "missing-upgrade-720035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-720035
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-720035: (3.67053414s)
--- PASS: TestMissingContainerUpgrade (99.66s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (86.532178ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-654628] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22021
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22021-563346/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-563346/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654628 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654628 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.218434319s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654628 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.65s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (337.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.804969650 start -p stopped-upgrade-698218 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.804969650 start -p stopped-upgrade-698218 --memory=3072 --vm-driver=docker  --container-runtime=docker: (46.102019818s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.804969650 -p stopped-upgrade-698218 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.804969650 -p stopped-upgrade-698218 stop: (10.78048982s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-698218 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-698218 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.769470562s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (337.65s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (14.42494843s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-654628 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-654628 status -o json: exit status 2 (350.528826ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-654628","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-654628
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-654628: (1.790482232s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.57s)

                                                
                                    
TestNoKubernetes/serial/Start (8.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654628 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.663110048s)
--- PASS: TestNoKubernetes/serial/Start (8.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22021-563346/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654628 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654628 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.48923ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.313476822s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.961223931s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.28s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-654628
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-654628: (1.88400836s)
--- PASS: TestNoKubernetes/serial/Stop (1.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-654628 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-654628 --driver=docker  --container-runtime=docker: (8.293362597s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-654628 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-654628 "sudo systemctl is-active --quiet service kubelet": exit status 1 (324.060722ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-698218
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-698218: (1.311334498s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
TestPause/serial/Start (72.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-261508 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-261508 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m12.43973098s)
--- PASS: TestPause/serial/Start (72.44s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (69.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m9.313960578s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (53.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (53.330872499s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.33s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.30360398s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.30s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (52.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-261508 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-261508 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.687830419s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.71s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-333548 "pgrep -a kubelet"
I1202 16:06:12.306481  567092 config.go:182] Loaded profile config "auto-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dsqbv" [681a98ff-2a02-4246-87ab-f6e28a860749] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dsqbv" [681a98ff-2a02-4246-87ab-f6e28a860749] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003543745s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-htchb" [4952a71e-c557-4d51-8140-71eb93b4e546] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005529979s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1202 16:06:46.435793  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (43.1636698s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-333548 "pgrep -a kubelet"
I1202 16:06:48.597758  567092 config.go:182] Loaded profile config "kindnet-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-333548 replace --force -f testdata/netcat-deployment.yaml
E1202 16:06:48.809569  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1202 16:06:49.348990  567092 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vhtxr" [380f9dc6-fa00-48a6-9f33-6108b1a1a062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vhtxr" [380f9dc6-fa00-48a6-9f33-6108b1a1a062] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004645519s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-261508 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-261508 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-261508 --output=json --layout=cluster: exit status 2 (374.264408ms)

                                                
                                                
-- stdout --
	{"Name":"pause-261508","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-261508","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fz57w" [f24ce2b7-bb43-4451-a853-7f57bf4ccfd8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003663529s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-261508 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
TestPause/serial/PauseAgain (0.63s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-261508 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

                                                
                                    
TestPause/serial/DeletePaused (2.59s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-261508 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-261508 --alsologtostderr -v=5: (2.594430437s)
--- PASS: TestPause/serial/DeletePaused (2.59s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.459953621s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-261508
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-261508: exit status 1 (24.610266ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-261508: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.54s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-333548 "pgrep -a kubelet"
I1202 16:07:10.415262  567092 config.go:182] Loaded profile config "calico-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lzgtc" [e4d73d26-ad60-4b80-8961-1b2709a7fa12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lzgtc" [e4d73d26-ad60-4b80-8961-1b2709a7fa12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004445013s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/false/Start (42.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (42.258712542s)
--- PASS: TestNetworkPlugins/group/false/Start (42.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (41.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (41.060308461s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.06s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)
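For reference, the three connectivity probes above (DNS, Localhost, HairPin) are each a single kubectl exec against the netcat deployment created from testdata/netcat-deployment.yaml. A sketch of running the same probes by hand, assuming that deployment is still present, listens on port 8080, and is exposed through a Service named netcat:

    # DNS: resolve the in-cluster API service name from inside the pod
    kubectl --context calico-333548 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to port 8080 on the pod's own loopback
    kubectl --context calico-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: connect back to the pod through its own Service name
    kubectl --context calico-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"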

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-333548 "pgrep -a kubelet"
I1202 16:07:28.860403  567092 config.go:182] Loaded profile config "custom-flannel-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q6hz7" [d515813e-1ab4-4f71-97cd-6bc5ec8d5e6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q6hz7" [d515813e-1ab4-4f71-97cd-6bc5ec8d5e6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004625419s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (42.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (42.957244007s)
--- PASS: TestNetworkPlugins/group/flannel/Start (42.96s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-333548 "pgrep -a kubelet"
I1202 16:07:53.503612  567092 config.go:182] Loaded profile config "false-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6qcd7" [37acb7c0-c5d8-488f-b8cd-7caa3a919845] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 16:07:55.507130  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-049660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-6qcd7" [37acb7c0-c5d8-488f-b8cd-7caa3a919845] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.00428937s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (67.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.909996593s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.91s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-333548 "pgrep -a kubelet"
I1202 16:08:03.744226  567092 config.go:182] Loaded profile config "enable-default-cni-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q54tk" [1cb32e6c-a804-4f9d-950f-6c6dd481bacb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q54tk" [1cb32e6c-a804-4f9d-950f-6c6dd481bacb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005557271s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (68.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m8.280402498s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (68.28s)
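Note that the kubenet profile is started with --network-plugin=kubenet rather than a --cni value; apart from that flag the invocation matches the CNI-based profiles above. The two styles, exactly as run in this job:

    # CNI plugin selected explicitly
    out/minikube-linux-amd64 start -p bridge-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=docker
    # legacy kubenet network plugin instead of a CNI
    out/minikube-linux-amd64 start -p kubenet-333548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker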

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-96xz7" [1be33669-130c-49cc-9043-b255f5dbeb22] Running
E1202 16:08:29.647962  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/skaffold-475198/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004601179s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
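The ControllerPod check simply waits for the flannel DaemonSet pod in the kube-flannel namespace to report healthy. A rough manual equivalent (a sketch; the test polls pods with the app=flannel label itself rather than using kubectl wait):

    # wait up to 10 minutes for the flannel controller pod to become Ready
    kubectl --context flannel-333548 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m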

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-333548 "pgrep -a kubelet"
I1202 16:08:35.998146  567092 config.go:182] Loaded profile config "flannel-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xr5qd" [3395cecc-cf8a-4258-a935-e60ddd78007c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xr5qd" [3395cecc-cf8a-4258-a935-e60ddd78007c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004189554s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (81.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-686980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-686980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m21.71078256s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (81.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-340190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-340190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (1m8.516040231s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-333548 "pgrep -a kubelet"
I1202 16:09:11.264936  567092 config.go:182] Loaded profile config "bridge-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tfnbx" [1e40b44b-4bf9-451e-90ec-6c6c8384f26e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tfnbx" [1e40b44b-4bf9-451e-90ec-6c6c8384f26e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004575628s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-333548 "pgrep -a kubelet"
I1202 16:09:34.195005  567092 config.go:182] Loaded profile config "kubenet-333548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-333548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rhl6x" [32b39df0-a993-412e-a9fa-d4f5526d5ff9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rhl6x" [32b39df0-a993-412e-a9fa-d4f5526d5ff9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004885717s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (39.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-904546 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-904546 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (39.342643354s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-333548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-333548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)
E1202 16:11:48.809653  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/addons-029941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-686980 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [10f8eb0c-488e-4290-8e95-87d80d6dec2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [10f8eb0c-488e-4290-8e95-87d80d6dec2a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004056023s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-686980 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)
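DeployApp creates the busybox pod from testdata/busybox.yaml and then reads the open-file limit inside it. Re-running the same steps by hand would look roughly like this (the kubectl wait step is an approximation; the test polls pods with the integration-test=busybox label instead):

    kubectl --context old-k8s-version-686980 create -f testdata/busybox.yaml
    # give the pod time to start, then check the file-descriptor limit the container sees
    kubectl --context old-k8s-version-686980 wait pod/busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-686980 exec busybox -- /bin/sh -c "ulimit -n"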

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-983569 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-983569 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (1m3.133674489s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-686980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-686980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.499270494s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-686980 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-686980 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-686980 --alsologtostderr -v=3: (11.145544049s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-340190 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c706b7dc-e3c5-4823-8e95-099005365552] Pending
helpers_test.go:352: "busybox" [c706b7dc-e3c5-4823-8e95-099005365552] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c706b7dc-e3c5-4823-8e95-099005365552] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004473546s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-340190 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-904546 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [22613d73-4e71-4ffd-a0b4-d5e94edd80bd] Pending
helpers_test.go:352: "busybox" [22613d73-4e71-4ffd-a0b4-d5e94edd80bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [22613d73-4e71-4ffd-a0b4-d5e94edd80bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004843512s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-904546 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-686980 -n old-k8s-version-686980
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-686980 -n old-k8s-version-686980: exit status 7 (110.495799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-686980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
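EnableAddonAfterStop leans on the fact that minikube status exits with code 7 when the host is stopped, which the test accepts before enabling the dashboard addon offline. A sketch of the same check in shell, using only commands that appear in this log:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-686980 -n old-k8s-version-686980
    if [ $? -eq 7 ]; then
        # host is stopped; the addon can still be enabled and takes effect on the next start
        out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-686980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi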

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (53.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-686980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-686980 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (53.085541227s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-686980 -n old-k8s-version-686980
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-340190 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-340190 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-340190 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-340190 --alsologtostderr -v=3: (11.194060527s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-904546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-904546 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-904546 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-904546 --alsologtostderr -v=3: (11.113010693s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-340190 -n no-preload-340190
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-340190 -n no-preload-340190: exit status 7 (99.639809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-340190 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-340190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-340190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (49.3928394s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-340190 -n no-preload-340190
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-904546 -n embed-certs-904546
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-904546 -n embed-certs-904546: exit status 7 (93.566345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-904546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-904546 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-904546 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (49.315096087s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-904546 -n embed-certs-904546
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-983569 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0d7faff4-ff00-435a-8e84-f7ffab170a6d] Pending
helpers_test.go:352: "busybox" [0d7faff4-ff00-435a-8e84-f7ffab170a6d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0d7faff4-ff00-435a-8e84-f7ffab170a6d] Running
E1202 16:11:12.594478  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.600994  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.612497  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.633975  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.675659  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.757285  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:12.919227  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:13.240897  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:13.882385  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003677713s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-983569 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sdjxh" [05186ec4-89f8-4164-b4ea-05ed9a4d8f8f] Running
E1202 16:11:15.164894  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003717763s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-983569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1202 16:11:17.726584  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-983569 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-983569 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-983569 --alsologtostderr -v=3: (11.151478518s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sdjxh" [05186ec4-89f8-4164-b4ea-05ed9a4d8f8f] Running
E1202 16:11:22.849724  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00503049s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-686980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-686980 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
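VerifyKubernetesImages lists the images cached in the profile and reports anything that is not a stock Kubernetes image (here the gcr.io/k8s-minikube/busybox image left over from DeployApp). A quick way to eyeball the same thing, assuming the default output of image list is one image reference per line:

    # list cached images and show anything outside registry.k8s.io
    out/minikube-linux-amd64 -p old-k8s-version-686980 image list | grep -v '^registry.k8s.io/' || true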

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-686980 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-686980 -n old-k8s-version-686980
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-686980 -n old-k8s-version-686980: exit status 2 (353.810859ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-686980 -n old-k8s-version-686980
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-686980 -n old-k8s-version-686980: exit status 2 (364.217671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-686980 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-686980 -n old-k8s-version-686980
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-686980 -n old-k8s-version-686980
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)
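The Pause test drives pause and unpause and reads the component states back through Go templates: while paused, status exits with code 2 and reports the API server as Paused and the kubelet as Stopped. The same round trip by hand, with the exit codes observed in this log:

    out/minikube-linux-amd64 pause -p old-k8s-version-686980
    # while paused, status exits 2 but still prints the component state
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-686980 || true    # Paused
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-686980 || true      # Stopped
    out/minikube-linux-amd64 unpause -p old-k8s-version-686980
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-686980            # expected: Running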

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-z2m58" [ea2419f9-10e9-49c5-bdeb-310c3d62fcc7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003488174s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569: exit status 7 (160.363523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-983569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-983569 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-983569 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (49.251173825s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-djr5j" [a066eb24-c07f-4547-9325-7de71fad5cc5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003861494s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (32.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-090162 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
E1202 16:11:33.091153  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/auto-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-090162 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (32.136014257s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-z2m58" [ea2419f9-10e9-49c5-bdeb-310c3d62fcc7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004680894s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-340190 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-djr5j" [a066eb24-c07f-4547-9325-7de71fad5cc5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006463094s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-904546 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1202 16:11:42.130220  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-340190 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)
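The VerifyKubernetesImages step lists the cluster's images as JSON and reports anything outside the expected Kubernetes image set; the busybox image noted above is a user-loaded leftover from earlier subtests, so it is only reported, not failed. A rough manual equivalent is sketched below; the jq filter and the assumption that each JSON entry exposes a repoTags field are illustrative and not part of the test itself.
# Sketch: list images for the profile and surface anything not from the usual registries.
out/minikube-linux-amd64 -p no-preload-340190 image list --format=json \
  | jq -r '.[].repoTags[]' \
  | grep -Ev 'registry\.k8s\.io|gcr\.io/k8s-minikube/storage-provisioner|kubernetesui' \
  || true   # assumes a repoTags field; adjust if the JSON shape differs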

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-340190 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-340190 -n no-preload-340190
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-340190 -n no-preload-340190: exit status 2 (519.047391ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-340190 -n no-preload-340190
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-340190 -n no-preload-340190: exit status 2 (471.692139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-340190 --alsologtostderr -v=1
E1202 16:11:42.046788  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:42.054130  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:42.066006  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:42.087911  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-340190 --alsologtostderr -v=1: (1.034401886s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-340190 -n no-preload-340190
E1202 16:11:43.337292  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-340190 -n no-preload-340190
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.92s)
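The Pause subtest above follows the same shape for every profile in this report: pause, confirm the apiserver reports Paused and the kubelet reports Stopped (both via exit status 2, which the test accepts), unpause, then confirm both status checks succeed again. A condensed bash rendering of that sequence with the profile from this block; the exit-status interpretation is taken from this log, not from minikube documentation.
PROFILE=no-preload-340190
out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE" || true   # "Paused", exit 2 here
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE" || true     # "Stopped", exit 2 here
out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"           # back to exit 0
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE"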

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-904546 image list --format=json
E1202 16:11:42.212125  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:11:42.373738  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-904546 --alsologtostderr -v=1
E1202 16:11:42.695364  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-904546 --alsologtostderr -v=1: (1.053931276s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-904546 -n embed-certs-904546
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-904546 -n embed-certs-904546: exit status 2 (488.104334ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-904546 -n embed-certs-904546
E1202 16:11:44.619145  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-904546 -n embed-certs-904546: exit status 2 (529.926872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-904546 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-904546 -n embed-certs-904546
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-904546 -n embed-certs-904546
E1202 16:11:46.434822  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/functional-169724/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-090162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1202 16:12:04.053619  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.060122  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.071603  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.093136  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.134675  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.216369  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:04.377880  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-090162 --alsologtostderr -v=3
E1202 16:12:04.699655  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:05.341801  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:06.623838  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:09.185510  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:14.307791  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-090162 --alsologtostderr -v=3: (11.059212788s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-090162 -n newest-cni-090162
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-090162 -n newest-cni-090162: exit status 7 (92.201759ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-090162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-090162 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-090162 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (12.887900771s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-090162 -n newest-cni-090162
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hlzsh" [b17e4608-2df7-4c3f-9bb8-3ef05392f789] Running
E1202 16:12:23.026511  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/kindnet-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:24.549838  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/calico-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004086093s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hlzsh" [b17e4608-2df7-4c3f-9bb8-3ef05392f789] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004843629s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-983569 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-090162 image list --format=json
E1202 16:12:29.079297  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.085750  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.097315  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.118810  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.160342  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.241897  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-090162 --alsologtostderr -v=1
E1202 16:12:29.403987  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 16:12:29.726280  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-090162 -n newest-cni-090162
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-090162 -n newest-cni-090162: exit status 2 (358.354035ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-090162 -n newest-cni-090162
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-090162 -n newest-cni-090162: exit status 2 (355.843232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-090162 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-090162 -n newest-cni-090162
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-090162 -n newest-cni-090162
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-983569 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-983569 --alsologtostderr -v=1
E1202 16:12:30.368617  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569: exit status 2 (459.265423ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
E1202 16:12:31.650482  567092 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/custom-flannel-333548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569: exit status 2 (377.957383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-983569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-983569 -n default-k8s-diff-port-983569
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

                                                
                                    

Test skip (28/435)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.06
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
264 TestGvisorAddon 0
293 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
294 TestISOImage 0
358 TestChangeNoneUser 0
361 TestScheduledStopWindows 0
393 TestNetworkPlugins/group/cilium 4.84
399 TestStartStop/group/disable-driver-mounts 0.22
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 15:10:10.891570  567092 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
W1202 15:10:10.938125  567092 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
W1202 15:10:10.953983  567092 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.06s)
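This skip is driven by the two 404 responses above: no v1.35.0-beta.0 preload tarball exists yet at either location. The same check can be reproduced by hand with the URLs from the log; the test itself performs it in Go inside preload.go rather than with curl, so the commands below are only an illustration.
curl -fsIL https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 \
  || echo "GCS preload missing (404 in this run)"
curl -fsIL https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 \
  || echo "GitHub preload missing (404 in this run)"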

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-333548 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-333548

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-333548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-333548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-333548" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-333548" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-333548" does not exist

>>> k8s: coredns logs:
error: context "cilium-333548" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-333548" does not exist

>>> k8s: api server logs:
error: context "cilium-333548" does not exist

>>> host: /etc/cni:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: ip a s:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: ip r s:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: iptables-save:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: iptables table nat:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-333548

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-333548

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-333548" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-333548" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-333548

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-333548

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-333548" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-333548" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-333548" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-333548" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-333548" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: kubelet daemon config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> k8s: kubelet logs:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22021-563346/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 15:59:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-698218
contexts:
- context:
    cluster: stopped-upgrade-698218
    user: stopped-upgrade-698218
  name: stopped-upgrade-698218
current-context: ""
kind: Config
users:
- name: stopped-upgrade-698218
  user:
    client-certificate: /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/stopped-upgrade-698218/client.crt
    client-key: /home/jenkins/minikube-integration/22021-563346/.minikube/profiles/stopped-upgrade-698218/client.key
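Note: the kubectl config dumped above explains the repeated 'context "cilium-333548" does not exist' errors in this debug log: current-context is empty and the only context defined is stopped-upgrade-698218, so there is nothing named cilium-333548 to query. A minimal Go sketch of checking whether a context exists before collecting logs against it (hypothetical helper, assuming k8s.io/client-go is available; this is not minikube's own code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists loads a kubeconfig file and reports whether a named
// context is defined in it.
func contextExists(kubeconfigPath, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists(os.Getenv("KUBECONFIG"), "cilium-333548")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	// For the kubeconfig dumped above this prints "false": only the
	// stopped-upgrade-698218 context is present.
	fmt.Println("context cilium-333548 present:", ok)
}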

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-333548

>>> host: docker daemon status:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: docker daemon config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: docker system info:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: cri-docker daemon status:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: cri-docker daemon config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: cri-dockerd version:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: containerd daemon status:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: containerd daemon config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: containerd config dump:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: crio daemon status:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: crio daemon config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: /etc/crio:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

>>> host: crio config:
* Profile "cilium-333548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333548"

----------------------- debugLogs end: cilium-333548 [took: 4.617919314s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-333548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-333548
--- SKIP: TestNetworkPlugins/group/cilium (4.84s)
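Note: the cleanup step logged above invokes the built minikube binary directly. A rough Go sketch of how such a step could be driven with os/exec (illustrative only, not the helpers_test.go implementation; the binary path and profile name are taken from the log line above):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the cleanup so a wedged delete cannot hang the run.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Same command as logged above: delete the leftover profile.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "delete", "-p", "cilium-333548")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("profile cleanup failed:", err)
	}
}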

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-458909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-458909
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
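
Note: the SKIP above is produced by a driver check in start_stop_delete_test.go. As a hypothetical sketch only (not minikube's actual helper; the driver parameter stands in for however the harness exposes the active driver), a guard of this shape yields exactly that skip message when the driver is not virtualbox:

package test

import "testing"

// maybeSkipDriverMounts skips the disable-driver-mounts flow unless the
// active driver is virtualbox, mirroring the skip message in the log above.
func maybeSkipDriverMounts(t *testing.T, driver string) {
	t.Helper()
	if driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}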